Wednesday, June 5, 2013

Healthcare robots

Yesterday I attended a talk by our new post-doc, Osamu Sugiyama. He's worked with the tiny Robovie MR-2 at ATR.


The robot could be used as a healthcare robot, he said. For example, when patients visit the doctor, explanations can get long and complicated. A bit embarrassed, patients just nod and pretend to understand instead of asking questions. So he proposed a healthcare robot that patients would be comfortable asking many questions to.

Also, even if patients understood the instructions at the doctor's office, they could forget them once at home. A healthcare robot could remind them, making sure they continue to follow the doctor's orders.

Tuesday, April 16, 2013

Articles on Cross-modal Emotions




Music and movement share a dynamic structure that supports universal expressions of emotion
http://intl.pnas.org/content/110/1/70
Basic emotions expressed through music and movement are cross-cultural

Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations
http://www.pnas.org/content/early/2010/01/11/0908239106
Basic emotions expressed through voice are cross-cultural


Emotion Recognition through Multiple Modalities: Face, Body Gesture, Speech
http://rd.springer.com/chapter/10.1007%2F978-3-540-85099-1_8

Book: Affect and Emotion in Human-Computer Interaction
http://rd.springer.com/book/10.1007/978-3-540-85099-1/page/1

Articles on Emotions in Child Development

Lots of new information from Child Development journals:

Preschoolers Use Emotion in Speech to Learn New Words
http://onlinelibrary.wiley.com/doi/10.1111/cdev.12074/abstract
Kids were able to recall the new words only when negative affect was used?

Longitudinal Relations Among Language Skills, Anger Expression, and Regulatory Strategies in Early Childhood
http://onlinelibrary.wiley.com/doi/10.1111/cdev.12027/full
Better language skills made kids less prone to expressing anger? Perhaps because they could express their feelings in words instead of through other means (facial expressions, tantrums, etc.)?

Friday, February 17, 2012

Robots using animation principles

Here are some ideas from a conversation with an animator from DreamWorks about making robots more realistic (a small sketch of the first two rules follows the list):

* when the eyes close, the eyeballs typically look down
* we blink every time we turn our head or look somewhere else
* one idea is to use muscle wires to animate a robot face
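As a rough illustration (my own sketch, not something the animator described), the first two rules might look like this as a small behavior layer; the face controller and its set_gaze()/set_eyelids() methods are hypothetical placeholders:

# Rough sketch of the first two rules above, assuming a hypothetical
# robot-face controller with set_gaze() and set_eyelids() methods.
import time

class BlinkBehavior:
    def __init__(self, face):
        self.face = face          # hypothetical robot-face controller
        self.last_gaze = None

    def close_eyes(self):
        # Rule 1: when the eyes close, the eyeballs look down.
        self.face.set_gaze(pan=0.0, tilt=-0.3)   # drop the gaze slightly
        self.face.set_eyelids(openness=0.0)

    def look_at(self, pan, tilt):
        # Rule 2: blink whenever the gaze moves to a new target.
        if self.last_gaze is not None and (pan, tilt) != self.last_gaze:
            self.face.set_eyelids(openness=0.0)
            time.sleep(0.1)                      # brief blink
            self.face.set_eyelids(openness=1.0)
        self.face.set_gaze(pan=pan, tilt=tilt)
        self.last_gaze = (pan, tilt)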

Monday, July 18, 2011

Nona installation

1. Download the Hugin bundle from http://sourceforge.net/projects/hugin/
2. Copy the Hugin application to the Applications folder
3. Copy initialize_environment.txt to your home folder, e.g. /Users/angelica
4. From your home folder, run:
   source ./initialize_environment.txt

Robot accompanist


To-dos
- test with a .wav file; the sound is not loud enough (see the sketch below)
- make the media location correct
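A possible starting point for the loudness to-do, sketched with placeholder file names and a gain factor I made up, assuming a 16-bit PCM .wav: read the samples, multiply by the gain, clip, and write the result back out.

# Boost the gain of a .wav file before the robot plays it.
# File names and GAIN are placeholders; assumes 16-bit PCM samples.
import wave
import numpy as np

GAIN = 4.0  # amplification factor, tune by ear

src = wave.open("accompaniment.wav", "rb")
params = src.getparams()
frames = src.readframes(src.getnframes())
src.close()

samples = np.frombuffer(frames, dtype=np.int16).astype(np.int32)
boosted = np.clip(samples * GAIN, -32768, 32767).astype(np.int16)

dst = wave.open("accompaniment_loud.wav", "wb")
dst.setparams(params)
dst.writeframes(boosted.tobytes())
dst.close()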

Tuesday, July 12, 2011

Chapter 1 - Why should we make humanoid robots?

Translations from "Why I Make Humanoid Robots" by Hiroshi Ishiguro 

From Computer Vision to Robot Research

For 10 years, I've been completely absorbed in robot research and development, but in the beginning I studied computer vision. Some of that work involved writing Prolog, but computer vision research mainly meant analyzing images from a camera so that the computer could recognize what was in the picture.

As I dug deeper into computer vision, a question sprang to mind: "Can a computer recognize reality without a body?"

For a computer to recognize an image, it needs to be loaded with knowledge about the contents of that image. But how much knowledge should we store? For example, to recognize a chair, we'd have to teach the computer every type of chair in the world. How do humans accomplish this kind of thing?

Based on our own human experience, we recognize that a chair is basically something we can sit on. Even if it's our first time seeing a particular chair, we can still recognize it as a chair.

That is, humans can recognize objects using their bodies, and don't need to fully understand the shape that appears in an image.

We pick out only the features we need to judge whether we can sit on something, which makes a kind of generalized recognition possible.

For computers to have an equivalent recognition ability, they should be able to move around in their environment, just like humans, and have a body that can touch things.

That's the reason I widened my research from the world of computer vision to the world of robotics.

Research on making robots with human-like vision

After my experience in computer vision, the first thing I tackled in robot research was giving robots a human-like sense of sight. That research can be classified into two categories: omni-directional vision and active vision.

In humans, two types of eye movement occur. The first is omni-directional eye movement, used to survey our surroundings: we first recognize where we are, then figure out where to go next. The second is continually looking at an object of interest to examine it in detail; this is called active vision. We need research on both kinds of eye movement for robots to play an active role in our daily lives.
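As a toy illustration (mine, not from the book), the two modes could alternate roughly like this; the detections from the omni-directional image are a hypothetical list of (pan, tilt) positions:

# Toy sketch of alternating between the two eye-movement modes described
# above: survey with the omni-directional view until something interesting
# appears, then switch to active vision and keep examining it.
from typing import List, Optional, Tuple

class GazeController:
    def __init__(self):
        self.target: Optional[Tuple[float, float]] = None

    def step(self, detections: List[Tuple[float, float]]) -> str:
        if self.target is None:
            if detections:
                self.target = detections[0]   # something interesting appeared
                return "active vision: fixate on %s" % (self.target,)
            return "omni-directional: survey surroundings"
        if self.target in detections:
            return "active vision: keep examining %s" % (self.target,)
        self.target = None                    # lost the target, go back to surveying
        return "omni-directional: survey surroundings"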