Gloo - The Robotic Pet

GitHub: Gloo - the animatronic pet

Overview

Gloo was originally inspired by Mira, created by Alonso Martinez. I had the honor of meeting him in person and listening to a great inspirational talk during the 2015 MESA Student Leadership Conference in Santa Clara, CA. Alonso reminded me to constantly keep learning and to not forget the importance of the looks of a final product; thus, I focused mostly on the robot's behavior and the details of its appearance. Currently, the robot in the video above is at a functional level, but you can already tell from the facial expressions that Gloo has a simple yet elegant way of expressing his feelings.

Materials

Other Materials

Development

Raspberry Pi and OpenCV

I have quite some experience with a wide variety of peripherals, electronic sensors, and actuators, so I decided to tackle the aspects of the project I was less familiar with first. Using a Raspberry Pi B+ with Raspbian Jessie, Python 2.7, and OpenCV 3.0, I started looking at different sample code involving facial recognition and webcam image capture. I had previous experience with basic Python 2.7 and with using OpenCV on Processing, independently, but this was my first time running OpenCV on Python 2.7. I set up an environment on my laptop to test and debug scripts, since my laptop is faster than the Raspberry Pi B+. Setting up all the packages wasn't a big issue on my laptop, but compiling OpenCV and installing all the necessary packages on the Raspberry Pi B+ took hours.

Once everything was set up, I wrote a short Python script for facial recognition from a webcam. The next step was identifying a way to transmit this data from the Raspberry Pi to an MCU; in my case, I chose a Teensy 3.1 because of its speed, low cost, and Arduino IDE compatibility.

First, I chose SPI, but soon regretted this decision. I spent hours looking for an SPI Python package for the Raspberry Pi with fairly good documentation; however, I couldn't find one. The best package I was able to find was from Adafruit, but they didn't provide documentation on how to use it. I understand SPI and am familiar with the protocol, so I was able to figure it out from their example scripts and the package's source code. I used my Saleae logic analyzer to verify that my implementation was working, and I wrote a Python script to transmit the x and y coordinates of faces recognized by the webcam. All the same, when I connected it to the Teensy 3.1, I realized that both devices were acting as masters; I soon learned that a slave implementation was not available for the Raspberry Pi, and I couldn't find much documentation for an Arduino slave online. I looked into I2C for a very short time and reached the same conclusion.

UART was my last resort. The Teensy 3.1 has three hardware UART peripherals, which is great for situations like this. I had also overlooked the fact that the Raspberry Pi can be operated entirely through the UART pins, if desired. I simply wrote the Python script to output to the console, since the Teensy 3.1 was going to be monitoring the system output regardless.
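
For reference, a rough sketch of what the Teensy side of that link might look like is shown below. It assumes the Pi prints each detected face as a comma-separated "x,y" line; the message format, baud rate, and variable names are my own assumptions for illustration, not the project's exact code.

    // Hypothetical Teensy 3.1 sketch that monitors the Raspberry Pi's console
    // output over UART. Assumes the Pi prints face coordinates as "x,y\n".
    int faceX = 0;
    int faceY = 0;

    void setup() {
      Serial1.begin(115200);  // hardware UART wired to the Pi's TX/RX pins
    }

    void loop() {
      if (Serial1.available()) {
        String line = Serial1.readStringUntil('\n');
        int comma = line.indexOf(',');
        if (comma > 0) {
          faceX = line.substring(0, comma).toInt();   // horizontal face position
          faceY = line.substring(comma + 1).toInt();  // vertical face position
        }
      }
      // faceX and faceY then feed the pan/tilt and display logic
    }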

Testing Each Peripheral

I wired up the SG90 hobby micro servo motors and a joystick to test the pan/tilt configuration I had. The joystick position was easily acquired from the ADCs, and the servos were controlled using the Servo.h library on Arduino.
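
Something along these lines is enough to exercise the pan/tilt rig; the pin assignments below are placeholders I picked for the example, not necessarily the ones used on Gloo.

    #include <Servo.h>

    // Hypothetical pin assignments for the pan/tilt test.
    const int PAN_PIN  = 9;    // SG90 pan servo on a PWM-capable pin
    const int TILT_PIN = 10;   // SG90 tilt servo
    const int JOY_X    = A0;   // joystick X axis into the ADC
    const int JOY_Y    = A1;   // joystick Y axis

    Servo pan;
    Servo tilt;

    void setup() {
      pan.attach(PAN_PIN);
      tilt.attach(TILT_PIN);
    }

    void loop() {
      // Map the 10-bit ADC readings (0-1023) onto the 0-180 degree servo range.
      pan.write(map(analogRead(JOY_X), 0, 1023, 0, 180));
      tilt.write(map(analogRead(JOY_Y), 0, 1023, 0, 180));
      delay(15);  // give the servos time to reach the new position
    }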

I originally intended to use the TFT display to show high-resolution images of a face, including eyes, eyebrows, nose, and mouth, but I soon discovered that, in spite of the 72 MHz I had the Teensy 3.1 running at, the refresh rate of the screen wasn't fast enough to make seamless transitions. I first tested the setup with example code provided by the Adafruit ILI9341.h Arduino library. Then, I attempted loading simple images stored in arrays generated from BMP files. Reading and writing these arrays was very slow, and the images were very low quality. I decided to try adding a micro SD card reader to load better images by increasing the storage size and accessibility. I started with the example code provided for this interface; the TFT display and the card reader both used SPI, so they shared several pins. The refresh rate when loading images was still pretty low, so I decided not to use high-resolution images and fell back to simple shapes.
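
Drawing the face out of filled primitives keeps the amount of data pushed over SPI small. The snippet below is only an illustrative sketch; the CS/DC pins, screen coordinates, and the drawEyes helper are all made up for the example.

    #include <SPI.h>
    #include <Adafruit_GFX.h>
    #include <Adafruit_ILI9341.h>

    // Hypothetical wiring: TFT chip select on pin 10, data/command on pin 9.
    #define TFT_CS 10
    #define TFT_DC  9

    Adafruit_ILI9341 tft = Adafruit_ILI9341(TFT_CS, TFT_DC);

    // Redrawing a few filled circles is far cheaper than streaming full-screen
    // bitmaps, so expressions can change without a visible redraw.
    void drawEyes(int pupilOffsetX, int pupilOffsetY) {
      tft.fillCircle(100, 120, 40, ILI9341_WHITE);                          // left eye
      tft.fillCircle(220, 120, 40, ILI9341_WHITE);                          // right eye
      tft.fillCircle(100 + pupilOffsetX, 120 + pupilOffsetY, 15, ILI9341_BLACK);
      tft.fillCircle(220 + pupilOffsetX, 120 + pupilOffsetY, 15, ILI9341_BLACK);
    }

    void setup() {
      tft.begin();
      tft.setRotation(1);               // 320x240 landscape
      tft.fillScreen(ILI9341_BLACK);
      drawEyes(0, 0);
    }

    void loop() {}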

The RGB LED was driven by PWM-enabled pins through current-limiting resistors.
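
In Arduino terms that amounts to one analogWrite per channel; the pins below are assumptions made for the sake of the example.

    // Hypothetical hookup: common-cathode RGB LED on three PWM pins,
    // each channel through its own current-limiting resistor.
    const int RED_PIN   = 3;
    const int GREEN_PIN = 4;
    const int BLUE_PIN  = 5;

    void setColor(int r, int g, int b) {
      analogWrite(RED_PIN, r);      // 0-255 duty cycle per channel
      analogWrite(GREEN_PIN, g);
      analogWrite(BLUE_PIN, b);
    }

    void setup() {
      setColor(0, 120, 255);        // e.g. a calm blue-green mood color
    }

    void loop() {}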

All Together

I used state machines to ease the design of the multiple systems and concurrent operations the Teensy 3.1 had to run.

My major issue was acquiring reliable images from the webcam with the pan/tilt system vibrating and moving so much. I had to make sure the state machines for controlling the servos and for acquiring input were aware of each other's operations. I could have placed both operations in a single state machine, but I designed the system this way so it would be easy to add other factors that affect the behavior of the servos, and so the webcam input could be used to control other systems, too.
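
The pattern looks roughly like the skeleton below; the state names and the 100 ms settling window are invented for illustration, but they show how the tracking machine can defer to the pan/tilt machine.

    // Rough sketch of two cooperative state machines polled from loop().
    // The tracking machine only trusts webcam coordinates while the pan/tilt
    // machine reports that the servos have settled.
    enum ServoState { SERVO_IDLE, SERVO_MOVING, SERVO_SETTLING };
    enum TrackState { TRACK_WAITING, TRACK_READING };

    ServoState servoState = SERVO_IDLE;
    TrackState trackState = TRACK_WAITING;
    unsigned long servoTimer = 0;

    void updateServos() {
      switch (servoState) {
        case SERVO_MOVING:
          // ...command the servos toward the new target here...
          servoTimer = millis();
          servoState = SERVO_SETTLING;
          break;
        case SERVO_SETTLING:
          if (millis() - servoTimer > 100) {   // let the vibration die down
            servoState = SERVO_IDLE;
          }
          break;
        default:
          break;
      }
    }

    void updateTracking() {
      // Only read new face coordinates once the pan/tilt machine is idle,
      // so motion blur and vibration don't corrupt the input.
      trackState = (servoState == SERVO_IDLE) ? TRACK_READING : TRACK_WAITING;
    }

    void setup() {}

    void loop() {
      updateServos();
      updateTracking();
    }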

I spent a lot of time drawing the faces and expressions for the TFT display and the animations between these expressions.

Normally, running a Python script at startup on a Raspberry Pi should be simple to set up, but I had installed all the packages in a virtual environment to allow for other potential configurations. The virtual environment required more steps to launch the Python script, and after hours of attempting different methods and reading solutions online, I wasn't able to get it working. Instead, I took advantage of the UART connection between the Teensy 3.1 and the Raspberry Pi to send any required initiation commands from the Teensy 3.1.
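
One way to do this, assuming a login console (getty) is enabled on the Pi's UART, is to have the Teensy simply "type" the start-up commands at boot. The delay, credentials, paths, and script name below are placeholders, not the project's actual values.

    // Hypothetical start-up handshake sent from the Teensy to the Pi's serial console.
    void setup() {
      Serial1.begin(115200);
      delay(30000);                                  // wait for the Pi to finish booting (assumed)
      Serial1.println("pi");                         // placeholder user name
      delay(1000);
      Serial1.println("raspberry");                  // placeholder password
      delay(1000);
      Serial1.println("source ~/cv/bin/activate");   // enter the virtual environment
      Serial1.println("python ~/gloo/track.py");     // launch the face-tracking script
    }

    void loop() {
      // normal operation begins once the Pi starts streaming coordinates back
    }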

Twitter's 140-sweet-characters

Blog/Quick Updates

  • Sep 14, 2014