Living in the Future with 3D Printing

Coffee To-Go Cup in Blender

I remember the very first time I saw a computer-generated image that actually looked awesome. It was in Scientific American, I think, probably in the late 1980s, and it was a scene with a simple sphere and a single light source. The article discussed the “ray-tracing” software behind it, a mind-blowing concept at the time. I think it took like a million hours to render the image, and it was AMAZING.
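For anyone curious what that kind of render boils down to, here's a toy version in Python: one sphere, one light, one ray per pixel, written out as a plain PGM image. This is my own minimal sketch of the classic setup, not the program from the article, and it skips everything (shadows, reflections, anti-aliasing) that made real ray tracers take so long:

```python
# Toy ray tracer: one sphere, one light, one ray per pixel -> sphere.pgm
import math

W, H = 200, 200
sphere_c, sphere_r = (0.0, 0.0, 3.0), 1.0   # sphere center and radius
light = (2.0, 2.0, 0.0)                     # single point light

def ray_sphere(o, d):
    """Nearest hit distance t along the ray o + t*d, or None (d is unit-length)."""
    oc = [o[i] - sphere_c[i] for i in range(3)]
    b = 2 * sum(oc[i] * d[i] for i in range(3))
    c = sum(x * x for x in oc) - sphere_r ** 2
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

rows = []
for y in range(H):
    row = []
    for x in range(W):
        # Camera at the origin, rays cast through an image plane at z = 1.
        d = ((x - W / 2) / W, (H / 2 - y) / H, 1.0)
        n = math.sqrt(sum(v * v for v in d))
        d = tuple(v / n for v in d)
        t = ray_sphere((0, 0, 0), d)
        if t is None:
            row.append(0)  # missed the sphere: black background
        else:
            p = tuple(t * d[i] for i in range(3))                    # hit point
            nrm = tuple((p[i] - sphere_c[i]) / sphere_r for i in range(3))
            l = tuple(light[i] - p[i] for i in range(3))
            ln = math.sqrt(sum(v * v for v in l))
            lam = max(0.0, sum(nrm[i] * l[i] / ln for i in range(3)))
            row.append(int(255 * lam))  # simple diffuse (Lambertian) shading
    rows.append(row)

with open("sphere.pgm", "w") as f:
    f.write(f"P2\n{W} {H}\n255\n")
    for row in rows:
        f.write("\n".join(str(v) for v in row) + "\n")
```

Forty lines and it renders in a blink on a modern laptop, which is exactly the point.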

Fast-forward to the future, which we now live in. My son and I have been taking a class at Madison College to learn Blender, an incredibly robust open-source 3D design program. We are going to make some projects to send off to be 3D printed at Shapeways, but Blender can also be used for animation and all kinds of other 3D projects.

Look, I made a coffee cup! I love living in the future!
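If you'd rather script than sculpt, a shape like this can also be roughed out with Blender's built-in Python API (bpy). This is just a minimal sketch using 2.7-era operator names, with made-up dimensions, not the actual steps from our class:

```python
# Minimal sketch: rough out a to-go-cup shape from Blender's scripting tab.
import bpy

# A tapered cylinder, slightly wider at the rim than the base (meters).
bpy.ops.mesh.primitive_cone_add(
    vertices=64,
    radius1=0.035,          # base radius
    radius2=0.045,          # rim radius
    depth=0.12,             # cup height
    location=(0, 0, 0.06),  # sit the base on the ground plane
)
cup = bpy.context.active_object

# Give the walls printable thickness; Shapeways wants a watertight shell.
shell = cup.modifiers.new(name="Shell", type='SOLIDIFY')
shell.thickness = 0.003     # 3 mm walls

# Export for the printing service. (To make it an actual open cup, you'd
# first delete the top face in Edit Mode; operator name is from Blender 2.7x.)
bpy.ops.export_mesh.stl(filepath="/tmp/coffee_cup.stl")
```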


Um, is it 2015, or 1984? Watch out for Facecrimes

An actual quote from 1984 by George Orwell, which we were listening to in the car the other day:

“He did not know how long she had been looking at him, but perhaps for as much as five minutes, and it was possible that his features had not been perfectly under control. It was terribly dangerous to let your thoughts wander when you were in any public place or within range of a telescreen. The smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself — anything that carried with it the suggestion of abnormality, of having something to hide. In any case, to wear an improper expression on your face (to look incredulous when a victory was announced, for example) was itself a punishable offence. There was even a word for it in Newspeak: FACECRIME, it was called.”

An actual quote from the Scientific American publication that arrived at my house later that same day:

The Big Screen: High-tech security on the ground keeps passengers safe in the air

“Anyone showing up at the airport with bad intentions is unlikely to comport himself like he’s spending the day at the beach. At the minimum, he’ll be furtive and evince some measure of edginess—even if subtle and subconscious. And this is where California-based Eyeris’s micro-expression recognition software, EmoVu, comes in.

Interfaced with a color or 3-D time-of-flight camera overlooking an airport terminal, EmoVu’s self-learning algorithms register even the subtlest facial cues—lasting just 1/20th of a second—that correspond to a suite of universal human emotions: joy, surprise, anger, sadness, fear, disgust, neutral. To put this into perspective, the human eye only recognizes macro-expressions, such as a smile or grimace, which usually last just 0.5 to 4 seconds. Such performance is largely determined by how many frames per second (fps) a visual device can capture, with the human eye topping out at about 5 fps, and EmoVu processing images at 140 fps. In addition, the human brain needs several seconds to register what the eye is seeing, but EmoVu’s “brain” does so almost instantaneously.

In practice, EmoVu functions like so: 12:05 pm, three people in field of view, 1 male—distressed, 2 females—happy. If programmed accordingly, it can then dispatch a security alert when certain affective criteria are crossed.

Eyeris’s CEO, Modar (JR) Alaoui, claims that his software consistently boasts an astounding 96.8% accuracy on account of its self-correcting and deep-learning artificial intelligence, which makes it more accurate at interpreting facial cues as it collects more data. In other words, the longer the technology operates, the more formidable it becomes.”

(http://sciamfot.com/the-big-screen/)
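The frame-rate claim is easy to sanity-check: a 1/20th-of-a-second micro-expression spans 7 full frames at 140 fps, but only a quarter of a “frame” at the eye's roughly 5 fps. Here's a quick back-of-the-envelope sketch in Python, with a toy alert rule of my own invention standing in for whatever criteria Eyeris actually uses:

```python
# Sanity-checking the article's numbers, plus a toy stand-in for the
# "dispatch a security alert when affective criteria are crossed" idea.

MICRO_EXPRESSION_S = 1 / 20.0   # a micro-expression lasts ~1/20th of a second

def frames_captured(fps, duration_s=MICRO_EXPRESSION_S):
    """How many frames fall within an expression of the given duration."""
    return fps * duration_s

print(frames_captured(140))     # EmoVu at 140 fps  -> 7.0 frames to work with
print(frames_captured(5))       # human eye ~5 fps  -> 0.25 frames, i.e. missed

# Hypothetical alert rule in the spirit of the "12:05 pm" example.
# (My invention for illustration, not Eyeris's actual logic.)
DISTRESS = {"fear", "anger", "sadness", "disgust"}

def should_alert(emotions_in_view):
    return any(e in DISTRESS for e in emotions_in_view)

# 1 male distressed, 2 females happy -> alert
print(should_alert(["fear", "joy", "joy"]))  # True
```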

Coincidence? I think not…

Using a baby griffin to avoid facecrime

Best way to avoid committing a Facecrime? Wear steampunk goggles and a baby griffin, and look distractedly off into the distance… (at Convergence 2015. Yay, SF conventions!)