Physical Computing – Dustyn Roberts

CLICK to go to documentation of Midterm Project
CLICK to go to documentation of Final Project (combined with ICM)

Blogs

Week 1 Readings:
Bret Victor’s “Brief Rant” started out with a video (disturbing) of a future “full of interactivity!” Except it wasn’t, really, because no person in the entire montage (ok, I stopped watching 2/3 of the way through because I couldn’t imagine how anything I saw was going to look much different from the last thing) ever interacted with anything…except maybe the ground (gravity) and a car seat (also gravity). The only other people present were either innocent, uninvolved bystanders, or poor men charged with opening doors so women with handheld devices could continue smoothly down their paths of “digital interactivity” without having to bother with anything lame and old-fashioned like “hello” or “thank you.” All that is to say, I get where Bret Victor is coming from. And I agree, a handheld display with no tactile surface to speak of and no interactive ability besides sliding a finger is pathetically limited. It reminds me of that scene in Wall-E where you first see “future humans” – blobs of fat with basically no hands or feet who sit all day and receive entertainment input and various nutritious liquids through a variety of cranio-facial orifices. I would take it a step further than Bret does in his rant, though. I think interactive devices should not only engage more of the human frame than the index finger, but should also improve human-to-human interactivity. (Pointing a handheld device at a hologram – live streaming video? – of a man and sending him virtual money doesn’t count.) I’m talking about personal connection – learning about and experiencing other people and building relationships.

Chris Crawford begins his book with some ideas about what interaction means and doesn’t mean. At first I was thinking about an interactive computer application (for example), and I thought, “well, if the computer application can in some way change the way you experience the world, then maybe it’s interactive.” But then I started thinking about how just reading an interesting article about something random on Wikipedia could change the way you experience the world. And that’s not really interactive because you’re not giving anything back to the computer – there is no “conversation” (unless, I suppose, in the case of reading a Wikipedia article, you add some information the previous authors left out…but that’s still not very interactive as far as interfacing with the computer itself goes).

So then I started thinking about how a computer application could actually have a conversation with a human. Sure enough, Crawford then talks about the essential elements of conversation. So a computer could give you information or feedback that could intellectually or physically change your experience of the world…what reaction could you have that would in turn indelibly change the way the computer functions? From there, the computer should (by nature of being permanently different in some way, and therefore creating totally new artifacts) take the next step in the conversation, starting the whole cycle over again. I’m not a techie, so honestly I’m not really sure at this point what kind of application would do that. But I’m ready to keep thinking about it.

As far as vouching for interactivity as a learning tool, I will give an example of myself going through my Arduino kit and preparing for this week’s labs. I sat and read the page about connecting the voltage regulator and wires to the breadboard over and over and over. It was dry and totally new to me. I was tired and was getting nowhere. So instead, I asked a friend just to show me and within 5 minutes I had everything connected on my own board, understood how the electric current flows through the components, and had it all documented and ready to go for lab on Wednesday. Having that human interaction was invaluable to me for learning that simple task.

I am hoping to do work that turns the continual scary/negative/accusatory/fear-mongering buzz coming out of the entrenched environmental world into a stream of interestingness/mind-bends/positivity/inspiration for people to really connect with and take away meaning from. I think interactivity with computers that help people interface with nature in real time could be a fun and exciting way to do that.

Week 1 Labs:

(Electronics Steps 1-5 done in class)

Electronics Step 6:

Electronics Step 7:

Electronics Step 8:

Electronics Step 9:

Video of Electronics Step 10 working:

Video of Switches Lab (Making Your Own Switch):

—————————————————————————————-

Week 2 Readings

Donald Norman’s The Design of Everyday Things discusses poorly designed (versus well-designed) objects. It focuses on the poorly designed ones and does not give an example of a well-designed technology. My reaction: is there a range where design quality can be determined objectively? Would the answer for a given object differ between people (users vs. designers, for example)? Norman discusses this latter point briefly in pointing out that sometimes the design of an object is not just the result of aesthetics or ease-of-use analysis but also of economics/manufacturing cost; what might be the most advantageous design for a manufacturer might not be optimal for the end user, and vice versa.

My grandfather, a slightly curmudgeonly old Swedish man, constantly exclaims, “WHO DESIGNED THIS??” about everyday objects. Last time I was at his house, he was complaining bitterly about the swing direction of his microwave door. He spent several minutes explaining and demonstrating why it is ridiculous that most microwave doors are manufactured to swing in the same direction, severely handicapping users like himself who want their microwave to be placed in a certain (less common) way.

My immediate and overwhelming reaction to things that are obviously badly designed is simply to avoid using them at all – for example, the refrigerator in Norman’s piece. If that were my refrigerator, I would have taken one look at the instructions, immediately decided they were too complicated, and made the executive decision that I’d just never adjust the temperature and deal with the consequences (frozen fresh food or melted frozen food – less stressful than figuring out the inexplicable instructions for changing the temperature).

With many gadgets that are too complicated to use, users develop work-arounds to accomplish some of the desired tasks in different, easier ways.

A few key terms/ideas from this reading:
mappings – between what you want to do and what appears to be possible

affordance – perceived vs. actual properties of a thing

feedback – receiving a signal back from the technology that an action has been carried out successfully or unsuccessfully.

An example of a technology that gives no feedback is the Terminal used for working with Git. I attended the Git/GitHub workshops, which involved using the Terminal. One of its frustrating features is that if a command is entered correctly…nothing happens. “Nothing” is supposed to be an affirmative notice that one has performed a task correctly. This is unintuitive and confusing.

“Whenever the number of possible actions exceeds the number of controls, there is apt to be difficulty.” I have a great example of this: the lack of a right mouse button on an Apple computer. I am a lifelong PC user and only just purchased my first MacBook this month. I love the new computer, but I still find it very annoying that I can’t right-click on anything. Using both hands for simultaneous key presses is inefficient and interrupts the flow of using the keyboard and trackpad. Perhaps a solid, uninterrupted trackpad is more attractive, but it is far less functional.

“The good relationship between the placement of the control and what it does makes it easy to find the appropriate control for a task.”

It is important to achieve a balanced, useful point between under- and over-visibility for controls of machines.

“Emotion & Design: Attractive things work better”
This article, also by Donald Norman, was written seemingly in response to commentary on his book The Design of Everyday Things, which criticized Norman for writing “against” beauty in design. Although I haven’t read the commentary itself, I didn’t get the impression from reading Chapter 1 of the book that Norman had an agenda specifically against attractive design. Not many of the examples he used in the chapter to illustrate badly designed items were of particularly beautiful things – the glass doors are an exception, as he did specifically point out that they were designed for aesthetics and not practicality of use. The other items, such as telephones, refrigerators, and car radio consoles, were not items specifically designed for aesthetic qualities.

However, I did find “Emotion & Design” a little convoluted. Norman uses the teapots as examples of items whose aesthetics and usability don’t align. First of all, Norman doesn’t discuss the subjectivity of judgments of aesthetic pleasure; this would have been relevant because his teapot example hinges completely on his own judgment of the teapots’ attractiveness. The example wasn’t meaningful to me, for example, because I disagreed with his evaluations. Obviously the pot with the handle in front is unusable (and in my opinion, not pleasing to look at), as he says. However, he names the strange glass pot as the one with the highest usability and lowest aesthetic value, and the tilting pot as the one with the highest aesthetic value. This I disagree with – I actually like the way the strange glass pot looks and dislike everything about the tilting pot. My point is that any discussion of aesthetic value has to be rooted in a discussion of relativism.

I would also argue that to a lesser extent so does a discussion of usability.

Interactive Technology Example: Elevator
I chose this “simple” technology because I spend a lot of time thinking about how certain elevator systems seem unable to function in a remotely efficient way. For a technology that has been around for more than a century, it is not clear to me why this should be the case. The elevators in this very building are an example of a set that does not serve its users well.

The way I would assume elevators should function:
Perhaps after a certain period of disuse (say, several hours) the car should return to the ground floor of the building, because it could be presumed that the users of the building are out and will be returning from outside on the ground level. The car will then be readily available to take on passengers from the ground level. After acquiring passengers, the elevator should travel upward, stopping at requested floors in ascending order until it has reached the highest requested floor. Where should the elevator go from there? Should it stay in place, waiting for a call from elsewhere in the building, which could come from anywhere? Should it automatically descend again to the ground floor, assuming a new load of passengers will arrive soon wanting to travel to higher floors? I’m still not sure which option would be the most efficient, or the most usable. (A toy sketch of this upward-sweep idea follows below.)
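Just to make that logic concrete, here is a toy sketch in plain C++ of the upward-sweep behavior I described – the floor numbers and the return-to-ground idle rule are my own made-up assumptions, not how any real controller works:

// Toy model of one upward sweep: service requested floors in ascending
// order, then apply an assumed idle rule (return to the ground floor).
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    int currentFloor = 0;                  // car starts at ground level
    std::vector<int> requests = {7, 3, 5}; // assumed: floors requested above the car

    std::sort(requests.begin(), requests.end()); // visit floors in ascending order
    for (int floor : requests) {
        std::printf("Traveling %d -> %d, stopping.\n", currentFloor, floor);
        currentFloor = floor;
    }

    // The open question from above: stay put, or assume the next passengers
    // arrive at street level and head back down?
    bool assumeLobbyTraffic = true;
    if (assumeLobbyTraffic) {
        std::printf("Idle: returning %d -> 0.\n", currentFloor);
        currentFloor = 0;
    }
    return 0;
}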

The way elevators actually function:
I observe that at certain times of the day, the elevator waiting area is very crowded, and passengers wait for a relatively long time (sometimes 4 minutes or so) for an elevator to arrive. This is particularly frustrating when the elevator (whose location is visible via the lit up floor numbers of the door) appears to be traveling back and forth, back and forth between a variety of floors without descending to the ground for a long period of time. Adding to the frustration is when the elevator finally arrives on the ground and only one person finally disembarks. Of course the elevator could have been very crowded and unloaded many people on higher floors in its circuitous journey, but there is a sense of pointlessness for riders waiting at the bottom who can’t see anything except that the elevator seems to be wandering aimlessly and interminably in the upper reaches of the building.

I don’t have a good alternative suggestion for how to avoid these situations, but I do know that not all buildings with elevators seem to have this problem. Designers of elevators – and of the programs that decide which passenger calls to answer, and in what order – must use varying methods, some more efficient than others.

Week 2 Labs:

Digital Input and Output with an Arduino
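The core of this lab, as a minimal sketch (assuming a pushbutton on pin 2 and two LEDs on pins 3 and 4 – check your own wiring):

const int buttonPin = 2;   // pushbutton (assumed pin)
const int yellowLed = 3;   // lit while the button is pressed
const int redLed = 4;      // lit otherwise

void setup() {
  pinMode(buttonPin, INPUT);
  pinMode(yellowLed, OUTPUT);
  pinMode(redLed, OUTPUT);
}

void loop() {
  if (digitalRead(buttonPin) == HIGH) {
    digitalWrite(yellowLed, HIGH);
    digitalWrite(redLed, LOW);
  } else {
    digitalWrite(yellowLed, LOW);
    digitalWrite(redLed, HIGH);
  }
}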

Analog In with an Arduino
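And the analog-in idea in miniature (a potentiometer on A0 dimming an LED on PWM pin 9 – pins assumed):

const int potPin = A0;   // potentiometer wiper (assumed)
const int ledPin = 9;    // LED on a PWM-capable pin (assumed)

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int reading = analogRead(potPin);   // 0-1023
  analogWrite(ledPin, reading / 4);   // scale to the 0-255 PWM range
}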


———————————————————————-

Week 3 Readings
Reaction to “Physical Computing’s Greatest Hits (and misses)”:
Since I’ve visited ITP during both the last winter and spring end-of-semester shows, I have seen many projects that fall into the categories mentioned in this article. I will attempt to remember a few and then to mention a few “real world” products that do too.

One project from the more recent spring show was called “Tornado Alley” (I think that is correct) and involved wind-speed sensors that translated the input into physical motion, spinning miniature wooden houses on the tops of metal stalks. I think this project would fall most closely into the category of “Things you yell at.” Although the input isn’t voice but air movement – sound is just a pressure wave traveling through a fluid (like air), and wind is also moving air – the basic idea is the same. The project was fun to play with, because a mild input produced an effect that seemed physically impossible unless you understood the computing happening below the surface. Watching your breath seem to spin a whole row of wooden houses was kind of magical. The project itself was really more a demonstration of using air pressure as an input for amplified kinetic motion, but I can see its applications extending into other technology fields.

Another project I distinctly remember from the winter show of 2011 was a box of water with heartbeat sensors implanted in each corner. Two users would stand on opposite sides of the box, press their hands onto the heartbeat sensors, and watch as the sensors translated the energy of their heartbeats into vibrations that caused the water in the box to ripple away from each side, meeting the other user’s set of ripples in the center. I’m not sure what category from “Physical Computing’s Greatest Hits (and misses)” this project would fall into…but it was extremely cool. I wish I had had more time to talk to the maker about what his personal takeaway from the project was. I remember asking whether they were photographing the patterns the ripples formed in the sand at the bottom of the box to give to each user; he said they weren’t, but that it would be interesting.

While I understand the purpose of play and experimentation, and of allowing these to serve as the ultimate “so what” of a project, I tend to gravitate toward projects that either have or could have an application. I really liked the project example in the Greatest Hits article that helped a physically challenged artist interface with his materials in a better way. There are so many tech gaps in the world that it will be hard for me to put a lot of energy and time into a project that I can’t see extending practically.

As we’ve discussed through previous readings and in class, a lot of the “interactive” tech we see these days focuses solely on the hands as the brain’s interface to the technology. Although, thinking relativistically, our smartphones and the like are amazing compared to what people used only 10 years ago, we obviously have a long way to go before we’ve accomplished more meaningful interactivity. The other interaction that seems to be prevalent in common technologies is motion sensing. We are constantly interacting with doors, lights, and other objects that respond to our physical presence by sensing our movement and opening, turning on, etc.

I am interested in smart buildings, which use biosensors in addition to more traditional motion sensing to adjust environmental factors to maximize comfort and efficiency. These technologies are ambient, i.e. the user does not always intentionally interact with them – instead they interact unconsciously, supplying their body temperature, heart rate, and even brain wave activity as data to the computer system and then enjoying the environmental changes (room temperature adjustments, lighting changes, etc.) that the system effects. These technologies are being used specifically in aggressively energy-efficient building projects because they take human behavior (more specifically, the difficulty of effecting intentional/conscious behavior change in individuals) out of the equation when it comes to reducing energy use.
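As a toy illustration of that kind of ambient interaction (entirely my own sketch – the TMP36-style temperature sensor on A0, the fan on pin 8, and the threshold are all assumptions): the occupant never pushes anything; the system just reacts to the warmth of a body near the sensor.

// Toy ambient interaction: the "user" is just a warm body near the sensor.
// Assumes a TMP36-style analog temperature sensor on A0 and a fan/relay on pin 8.
const int sensorPin = A0;
const int fanPin = 8;

void setup() {
  pinMode(fanPin, OUTPUT);
}

void loop() {
  int reading = analogRead(sensorPin);       // 0-1023 from the ADC
  float voltage = reading * (5.0 / 1023.0);  // convert to volts
  float degreesC = (voltage - 0.5) * 100.0;  // TMP36: 10 mV per degree, 500 mV offset
  // switch the fan with no intentional input from the occupant:
  digitalWrite(fanPin, degreesC > 25.0 ? HIGH : LOW);
  delay(1000);                               // check once per second
}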

Week 3 Lab 1: ServoMotor
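A minimal sketch of the idea behind this lab – mapping an analog sensor to a servo angle – assuming the servo signal wire on pin 9 and a sensor on A0 (your pins may differ):

#include <Servo.h>

Servo servoMotor;               // from the built-in Servo library

void setup() {
  servoMotor.attach(9);         // servo signal wire on pin 9 (assumed)
}

void loop() {
  int reading = analogRead(A0);              // sensor: 0-1023
  int angle = map(reading, 0, 1023, 0, 179); // scale to the servo's range
  servoMotor.write(angle);
  delay(15);                                 // give the servo time to move
}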

Week 3 Lab 2: Photoresistors and Speaker (disregard incorrect title on video)
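The heart of this lab, sketched minimally (assuming a photoresistor voltage divider on A0 and a speaker on pin 8; the input range is a guess that needs tuning to the actual readings in the room):

const int speakerPin = 8;       // speaker through a resistor (assumed pin)

void setup() {
}

void loop() {
  int reading = analogRead(A0);                      // photoresistor divider: 0-1023
  // map the light level to an audible pitch; tune 200-900 to your sensor:
  int frequency = map(reading, 200, 900, 100, 1000);
  tone(speakerPin, frequency, 10);                   // play for 10 ms
  delay(10);
}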

—————————————————————————
Week 4 Readings:

Visual Intelligence by Donald Hoffman
This article, describing the surprising brain mapping phenomena in amputation patients, presents the idea that our brains construct how we experience our physical world…and that experience is tantamount to reality for an individual.

I have my own experience to add to this conversation; it came to mind while reading the portion of the article discussing “cortical plasticity.” In other words, parts of our brains not conventionally associated with various parts of the body seem to be involved with the sensations coming from those body parts; even if we lose a particular body part, the part of the brain associated with its sensation remains. I am not an amputee, but I also experience some overlap in sensations/perceptions due, according to research, to overlapping circuitry in parts of my brain. I have synesthesia, a condition (or ability, depending on how you look at it) in which multiple senses overlap, creating a heightened experience of certain input. In my case, I associate colors with letter and number characters, and to a certain extent with the sounds letters make in speech. I was unaware this was unusual until I read an article about synesthesia in a science magazine about ten years ago. I thought everyone associated colors with letters and numbers. For me, this ability not only adds an extra element of experience to my world, it also helps me spell, remember strings of numbers, and spot errors in text. It also adds an element of emotion to words, phrases, and numbers, depending on my taste for the colors they represent in my head. Researchers have worked on understanding the various forms of synesthesia (there are people who experience many combinations of senses, often more than two at a time) and have postulated that the phenomenon results from neural circuitry in disparate parts of the brain, responsible for different kinds of sensory input, overlapping and exchanging information.

The article also reminded me of a TED talk I’ve watched a few times. The speaker is Jill Bolte Taylor, a neuroscientist who was herself a stroke victim. In the video, Jill describes in great detail her experience of having the massive stroke that completely disabled the left side of her brain – the side that houses our rational “selves” and helps us think logically and understand and use language. During her stroke, Jill experienced total takeover by the right side of her brain – she became unable to distinguish the separation between her body and the surrounding environment and became, temporarily, an animal of pure feeling. She even says the experience was strangely peaceful, except when the left side of her brain would flicker back into action, warning her that something highly unusual and dangerous was happening to her body. I very much recommend watching the video.


All this is to say, this article on cognitive mapping in relation to amputees was an interesting and new exploration of cognitive plasticity and the role our brains play in constructing the reality we experience and believe in whole-heartedly every moment of our waking lives.

Week 4 Labs:

Graphing a Sensor:
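The Arduino side of this lab is tiny – the board just prints one reading per line for the Processing graphing sketch to pick up. A minimal version (sensor on A0 assumed):

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(analogRead(A0));  // one reading per line for the grapher
  delay(10);
}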

Creative Project – Miniature Lighthouse with 2d Lighthouse Sketch:

—————————————————————————————

Week 5 Readings
Design Meets Disability by Graham Pullin

First impression of this reading: I was very surprised that glasses were government issued until the 1960s.

Thinking about hearing aids as the next disability-correcting fashion piece is difficult. At first, I thought, “But hearing aids have more of a stigma because deafness is a more severe disability than vision impairment and, more importantly, is associated with the degradation of old age.” But then I realized that vision impairment can also be associated with old age, although many who wear corrective lenses are not old. Still, far more people walk around with vision impairment bad enough to require corrective lenses than with hearing impairment severe enough to require hearing aids. Therefore, glasses started out with a better chance of becoming normative (as they obviously have) than hearing aids.

The discussion about prosthetic limbs is a little different. I think it makes sense that prosthesis wearers might prefer a piece that doesn’t mimic real human limbs too closely – the point about watching people realize a limb is a prosthetic rings very true. In addition, I think amputees are viewed and regarded differently than the hearing impaired; for better or for worse, I think there is a sense of deep sympathy that is expected and generally felt toward amputees or prosthesis wearers that allows them greater freedom of movement between the conventional and the fabulous when choosing a prosthesis type.

I respect Pullin’s point that if more people wore decorative hearing aids, they would become normative much as glasses have. I disagree to a certain extent, though, for the reasons outlined two paragraphs ago. I think it is possible they can become MORE normative, in an avant-garde sort of way, but I’m not sure I can imagine it becoming mainstream to wear noticeable, statement-making hearing aids. Then again, if I were writing in the 60s about glasses, I might be making the same argument.

Week 5 Lab:
Multiple Serial Inputs with Punctuated Communication:
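In the punctuated method, the Arduino simply streams comma-separated values and ends each set with a newline; the Processing side splits each line on the commas. A minimal sender sketch (assuming two analog sensors on A0/A1 and a switch on pin 2 – adjust to your wiring):

const int switchPin = 2; // digital input (assumed)

void setup() {
  Serial.begin(9600);
  pinMode(switchPin, INPUT);
}

void loop() {
  Serial.print(analogRead(A0));           // first sensor
  Serial.print(",");
  Serial.print(analogRead(A1));           // second sensor
  Serial.print(",");
  Serial.println(digitalRead(switchPin)); // println ends the set of readings
  delay(10);
}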

Multiple Serial Inputs with Call-and-Response/Handshaking Method:
I had a problem with this lab – I couldn’t get the upload to Arduino to work. I kept getting “‘sensorValue’ was not declared in this scope” as an error message. I experimented with moving the variable declaration for sensorValue around and didn’t have success. Here is the code I was using:
const int switchPin = 2; // digital input

void setup() {
  // configure the serial connection:
  Serial.begin(9600);
  // configure the digital input:
  pinMode(switchPin, INPUT);
  establishContact();
}

void loop() {
  if (Serial.available() > 0) {
    // read the incoming byte:
    int inByte = Serial.read();
    // read the sensor:
    sensorValue = analogRead(analogOne);
    // print the results:
    Serial.print(sensorValue, DEC);
    Serial.print(",");

    // read the sensor:
    sensorValue = analogRead(analogTwo);
    // print the results:
    Serial.print(sensorValue, DEC);
    Serial.print(",");

    // read the sensor:
    sensorValue = digitalRead(digitalOne);
    // print the last sensor value with a println() so that
    // each set of four readings prints on a line by itself:
    Serial.println(sensorValue, DEC);
  }
}

// read the sensor:
// int sensorValue = analogRead(A0);
// print the results:
// Serial.print(sensorValue);
// Serial.print(",");

// read the sensor:
// sensorValue = analogRead(A1);
// print the results:
// Serial.print(sensorValue);
// Serial.print(",");

// read the sensor:
// sensorValue = digitalRead(switchPin);
// print the last reading with a println() so that
// each set of three readings prints on a line by itself:
// Serial.println(sensorValue);
// }

void establishContact() {
  while (Serial.available() <= 0) {
    Serial.println("hello"); // send a starting message
    delay(300);
  }
}

I was not able to come to office hours this week because I was out of town for Fall break, and wasn't able to work on this lab until today :( :( I will continue working on it tonight and tomorrow until I get it to work.
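Looking at the code again, I think the immediate problem is that sensorValue is used in loop() but never declared anywhere – and analogOne, analogTwo, and digitalOne don’t exist either. A corrected sketch to try (assuming the two analog sensors are on A0 and A1 and the switch is on pin 2, as in the commented-out code above):

const int switchPin = 2; // digital input

int sensorValue = 0;     // declared at file scope so loop() can see it

void setup() {
  // configure the serial connection:
  Serial.begin(9600);
  // configure the digital input:
  pinMode(switchPin, INPUT);
  establishContact();
}

void loop() {
  if (Serial.available() > 0) {
    // read the incoming byte; its value doesn't matter, it just signals
    // that the other side is ready for another set of readings:
    Serial.read();

    // read and send the two analog sensors:
    sensorValue = analogRead(A0);
    Serial.print(sensorValue, DEC);
    Serial.print(",");

    sensorValue = analogRead(A1);
    Serial.print(sensorValue, DEC);
    Serial.print(",");

    // read the switch, and use println() so each set of three readings
    // prints on a line by itself:
    sensorValue = digitalRead(switchPin);
    Serial.println(sensorValue, DEC);
  }
}

void establishContact() {
  while (Serial.available() <= 0) {
    Serial.println("hello"); // send a starting message until we hear back
    delay(300);
  }
}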

**UPDATE 10-24**
I worked on this lab again tonight after class and still had coding errors in both Processing and Arduino. Here is a screenshot of both windows with errors.


————————————–
Week 6 Lab:
Transistor:
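A minimal sketch of the usual transistor-lab idea (a pot on A0 setting the speed of a motor driven through a transistor on PWM pin 9 – my assumed wiring, not necessarily this lab’s exact circuit):

const int motorPin = 9;   // transistor base via a resistor, on a PWM pin (assumed)

void setup() {
  pinMode(motorPin, OUTPUT);
}

void loop() {
  int reading = analogRead(A0);        // pot: 0-1023
  analogWrite(motorPin, reading / 4);  // 0-255 PWM duty to the transistor
}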