Category Archives: Sci-Fi Science

Is Google Glass dead? I don’t think so, and here’s why

Subtitle: A short essay in which Kelsey proves she is an extremophile of Sci-Fi literature

People love to make predictions of success and failure. Star Trek is commonly cited as one of the first TV shows to predict the flip phone, iPad, and smartphone. ‘Back to the Future II’ predicted the hoverboard, Arthur C. Clarke first conceived of GPS, and Ray Bradbury, in ‘Fahrenheit 451’, predicted earbuds, giant TVs, and mechanical hounds…and cats*. Compared to all that, digital glasses (à la William Gibson’s ‘Neuromancer’) seem like a slam dunk. Plus, Google is like the Pixar of the tech world. They’ve had so many wins, how could they fail? Yet, two years and thousands of selfies later, the predicted revolutionary impact of Google Glass now seems to be going the way of the Segway.


Restored 17th C. sketch of Raphus cucullatus by Dronte (Wikimedia)

However, like the Segway, Google has not tapped into the true market for Glass: the primary (raw materials) and secondary (manufacturing) economic sectors. Stay with me here and I promise this leads back to paleontology. How handy would it be for an inventory screen to pop up in the upper right corner of a person’s vision? Or for a logging company to keep track of where and how they are cutting down trees? You could even keep a record of what each tree looked like before. A surgeon has already used Glass to record a procedure, and in the future I can see EMTs sending reports and pictures of the patient to the hospital before they arrive, so the staff are better prepared.

No one would judge a person wearing Glass to take inventory or to save a life. Instead of trying Glass out in the shower or at a wine bar, we as a society should focus on what technology can contribute to humanity, not how it can enhance a Facebook status.

“Really, Captain, I don’t feel silly at all wearing this…” (ST DS9)

Currently, the University of Texas at Austin’s Non-Vertebrate Paleontology Laboratory (NPL) has a team of volunteers using three Glasses to conduct a cursory inventory of their 3.5 MILLION fossils. Chase, one of the employees at NPL, calculated that it would take him 90 YEARS to catalog all the fossils currently at NPL the “traditional way.” The pictures produced by Glass are sharp enough that they are already being used for reference. It’s a damn good start.

If the 2015 Consumer Electronics Show (CES) is any indication, wearable tech is the way of the future, but a future that must be as useful as it is flashy. Gadgets can’t just be useful for the consumer market; they have to blend into the background of a “normal” life. However, in an industrial or scientific context, normal is shoved out the window in favor of innovative tech and, most importantly, gadgets that make people’s lives easier.

So, there you have it. Google Glass is not a Segway (which, by the way, has been adopted in large manufacturing facilities and by security companies), but a useful tool for the future.

And don’t forget: there are plenty of wrong futuristic predictions as well.

*And let’s not get into how accurate Aldous Huxley’s ‘Brave New World’ turned out to be. I’m just glad that fannypacks aren’t nearly as popular as he predicted.

Google Glass and paleontology collections: keeping a level head

Author: Kelsey

Today we continue on our Google Glass adventure! Check in here and here to see what we’re doing and why.

Confession time: At this point in time I know only a rudimentary amount of programming. This only becomes a problem when I get a brilliant idea (my Morse Code App will exist… one day) or Google Glass doesn’t do something I want it to do. Well, short of taking a crash course in shoddy programming, I decided to create a “physical app” to address my predicament.

The quandary: To take a picture with Glass, you say “Ok, Glass, take a picture”. The computer then beeps happily at you as it takes a ‘screen grab’ of your life. Now, our version of Glass did not get the update that lets you aim the camera before a picture is taken (this is a common complaint among those wandering Google Explorers). So, every time I took a picture, I found that I had tilted my head about 30 degrees to the right.


Either that or we have a serious problem with our drawers

I’m not sure if this is because all the weight of Glass is on the right, or because I naturally incline my head about 30 degrees, but I needed a way to keep my head level as I took pictures. When I was a kid, I used to play with my dad’s level, assessing the stability of my Barbie house and bunk bed (yep, I had a bunk bed/fort/space ship… on reflection I’ve significantly downgraded since then). So I decided to create a device to hold a small level in front of my left eye.

After a quick trip to JoAnne’s, I acquired a teeny level my kid-self would be jealous of. I raided the NPL supply closet and came away with tongue depressors, B-52 (an adhesive), and twisty ties. One hour later, it lived!


Behold! Our product shot

Notice how far out the level is compared to the Glass display. That’s because Glass uses refraction to make the image appear farther away than it actually is. The level uses good old-fashioned corneal focusing. That day I achieved two goals: I provided the entertainment of the day by strutting around, and I managed to take level pictures.


This represents a beautiful moment in my life

In the end, the device is more of a training tool than a permanent addition. Once I had the feel for what “level” was, I was able to remove the device and still take even shots with Glass. Then again, why would I want to take off such a classy addition?


NPL: they tolerate me so well

Special thanks to Angie and Cissy for the photoshoot.

Questions? Comments? Leave them below.

Google Glass and paleontology collections: what we learned, Part II

Author: Kelsey

Recap: One of my projects this summer was testing Google Glass for the Nonvertebrate Paleontology Laboratory (NPL) here at the University of Texas Jackson School of Geosciences. We are interested in testing curation potential. After many rather artistic drawer shots and some casual photograph comparisons, I decided to suit up and get systematic. If you’re just joining us, check out Part I here!

Goals: My primary interest is in data preservation, so I’ll be focusing on object and text resolution, including degree of pixelation, lighting conditions, and glare. I decided to compare Glass against another small, mobile device: the iPhone 5.


Setting: The set of drawers I decided to photograph houses a collection of preserved insects (sadly, no DNA here). The external dimensions of the bottom cabinet are 59.5 x 71.2 x 121 cm (w x l x h). The average drawer width and length is 54 x 66 cm. The only light sources are overhead fluorescent bulbs. The temperature was around 85 degrees F (29.5 C) with a humidity around 60%.


Not shown: cabinet of extra undergrads

 

Parameters: I am testing the “hands off” potential of Glass, so I only used voice commands, resorting to the touchpad only when I had to. None of the pictures needed to be “shared,” just saved on the device, so no wi-fi or Bluetooth connection was necessary. I wore the glasses lanyard to prevent slippage (discussed in Part I). For comparison, I used the NPL’s iPhone 5. Both tests were timed, and any label I removed from a bag to photograph in one trial, I also removed in the others. Both the outside (“Out”) and inside (“In”) of each bag were photographed.

Scoring System: The recording device with the greatest object resolution and text resolution would be tallied for each drawer image. If there was no appreciable difference, or both could work just as well, both were tallied for that picture.
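For the programming-curious, the tallying rule above fits in a few lines of Python. This is just a sketch of the scoring logic; the judgment labels are hypothetical placeholders, not actual NPL data:

```python
def tally(comparisons):
    """Tally per-image judgments: the better device gets a point;
    a tie (no appreciable difference) counts for both devices."""
    scores = {"glass": 0, "iphone": 0}
    for judgment in comparisons:
        if judgment == "tie":
            scores["glass"] += 1
            scores["iphone"] += 1
        else:
            scores[judgment] += 1
    return scores

# Example with made-up judgments for four drawer images:
print(tally(["glass", "tie", "iphone", "tie"]))  # {'glass': 3, 'iphone': 3}
```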


 

Results:

Time: Glass, 15 minutes 32 seconds; iPhone 5, 21 minutes 15 seconds

[Table: object and text resolution tallies, not reproduced]

Discussion

There you have it! Both the iPhone and Google Glass have good enough resolution to record object data (i.e. the fossils are recognizable). The iPhone outperforms Google Glass in text resolution, but Glass takes only about 73% of the time. This is only a pilot study with one trial (I know, but n = 1 sounds like a better and better plan as the temperature rises in the cages!), but it is very important in determining our next step. Mainly, we need a higher-resolution camera in Glass. At this point more information is lost to resolution, lighting, and glare than is made up for by the hands-free Glass experience…for now, anyway.
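The time savings falls straight out of the recorded times, if you want to check the arithmetic:

```python
# Convert the two recorded times to seconds and compare.
glass_s = 15 * 60 + 32    # Glass: 15 min 32 s  -> 932 s
iphone_s = 21 * 60 + 15   # iPhone 5: 21 min 15 s -> 1275 s

ratio = glass_s / iphone_s
print(f"Glass took {ratio:.0%} of the iPhone's time")  # about 73%
```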

The pace of not only technology innovation, but technology adoption, is increasing. We could fear change and criticize the hiccups, or we can work to understand these emerging technologies and use them in novel ways no one ever thought possible. Personally, I love a bit of constructive criticism (it’s the only way I stopped being the “know-it-all” kid in high school), but too much negativity only highlights technology’s Orwellian uses. We expect Glass and similar devices will catch up with smartphones in no time. At that point we (NPL) plan to acquire a second Glass.

Our future projects include training volunteers, testing the screen projection capabilities, tagging images, linking images, app programming, and virtual field trips. Living on the bleeding edge definitely has its drawbacks, but this summer has been a fascinating experience and I can’t wait to see what is next.

Stay tuned for the next post in this series: how to deal with the dang drawer tilt.

Questions? Comments? Leave them below.