Archive for October, 2011

3D Printing Objects

Although I have already talked about 3D printing organs, I came across this video a few weeks ago and it blew my mind. Just this weekend I also went to a technology conference where they had the printers up and running. I got to touch the products, move them, and try them out. It was one of the coolest things I have had the chance to experience.

Watch this video and you'll understand:

If you're asking yourself, "What is the purpose of this?", think about the possibilities. Companies or high schools can purchase a machine for roughly $40,000, with some stipends available for educational purposes. Students can learn how to use printers like the ZPrinter with the possibility of using one in their future jobs. The products that come out of the ZPrinter are mostly used as prototypes, but students can also make items and sell them to raise money for their school. The speaker at the conference said he heard that one school made pens with the ZPrinter, and all they had to do was buy the ink to insert in the pens.

Furthermore, the video below talks about using a 3D printer in space so that astronauts could make any type of tool they might need. When spacecraft are launched they have to carry everything they will need while keeping the weight as low as possible. Could you imagine bringing even fewer items and only making them with the printer when they are needed? Space launches sometimes carry items the crew thinks they will need that end up never being used. This way they could just bring the ZPrinter (365-750 lbs) and print items only when they need them.

The ZPrinter website lists some statistics for their different machines:

  • 5x-10x faster than all other technologies
  • Output multiple models in hours, not days
  • Build multiple models at the same time
  • Support an entire engineering department or classroom with ease
  • Produce realistic color models without paint
  • Better evaluate the look, feel, and style of product design
  • 3D print text labels, logos, design comments, or images directly onto models
  • Full 24-bit color, just like a 2D printer. Produce millions of distinct colors
  • One-fifth the cost of other technologies
  • Finished parts cost $0.20 USD per cubic centimeter in material
  • Unused materials are recycled for the next build, eliminating waste
  • Quiet, safe, odor free
  • Eco-friendly, non-hazardous build material
  • Accurate; part features within +/- 0.008 inches* (+/- 0.2 mm*)
  • Zero liquid waste
  • No physical support structures to remove with dangerous cutting tools or toxic chemicals
All in all, this type of 3D printer is definitely going to be a large part of our future. Could you imagine having one of these in your house? You can literally design a product on your computer, hit print, and a few hours later have an assembled, movable part.
I hope you now understand how amazing this product is and that you enjoyed reading my blog.
Quiz time!!
~Lauren

October 30, 2011 at 9:05 pm

Ultra-Cheap Tablets Are on the Horizon

I have been hearing for over a year how tablet computers will become a device that many people use on a daily basis. When the iPad was introduced in April 2010, people went crazy for it: it sold an unprecedented 3 million units in less than 3 months. At a minimum of $500 a pop, that is over $1.5 billion in revenue. But after seeing countless people struggle to find a way to assimilate their new ‘toys’ into their lives in a practical way, I bought into the notion that tablets were a fad that provided nothing a laptop or smartphone couldn’t.

Perhaps the sense of impracticality came from the cost. $500+ is a lot to spend on something with an undefined purpose. Perhaps a tablet with a smaller price tag could provide benefits that outweigh the cost. That is exactly what the $35 Aakash Android tablet aims to do. With an initial release planned for India next month, the Aakash cuts back on performance in order to be affordable.

It will be easy for some to call the Aakash a piece of junk or an extremely cheap knockoff of an iPad, but the fact of the matter is that all technology becomes obsolete eventually. The new, beautiful, sleek $1,500 MacBook Pro that gave its owner a huge image and ego boost four years ago is now embarrassing to take out in public because it looks old and doesn’t behave like a new computer. Everywhere you turn you can see people on their iPhones, iPads, and shiny new MacBooks. But I digress…

Aakash Tablet

The Aakash comes equipped with the Android 2.2 mobile OS and has a browser called UniSurfer installed, which processes webpages faster to compensate for the slower processor. It also has 3G and Wi-Fi capabilities and a battery life of 3 hours, plus two USB ports that let the user attach devices such as a keyboard or an external hard drive. Because its main purpose is educational use, it was designed to handle some additional wear and tear; most people using the device will have little to no prior experience with computers.

Are there faster, more powerful products out there? Sure. Can any of them match the affordability of the Aakash tablet? Heck no.

Thanks for reading! Head over to Blackboard to take the extra credit quiz!

-Luke

October 26, 2011 at 11:43 pm

The Future of TouchScreens

In today’s society, touch screens are everywhere. They are on our phones and tablets, in stores, and in many other devices and places. The problem is that they are somewhat limited. Yes, some allow you to use two or three fingers, but that is it. It seems this may change.

Researchers at Carnegie Mellon University have come up with new technology that can tell exactly what has touched a device’s screen. This technology, called TapSense, was developed by Chris Harrison and Julia Schwartz, both PhD students at the Human-Computer Interaction Institute at Carnegie Mellon. TapSense uses a microphone attached to the screen to determine what exactly has interacted with it. Instead of just detecting taps, the system is able to distinguish between taps with the tip of a finger, the pad (the part of your finger that has the fingerprint), the fingernail, and even the knuckle. Basically, instead of just detecting gestures, the software can “listen” to what part of the finger was used and act accordingly.
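
The researchers have not published their code, but the basic idea is easy to sketch: grab a short audio window around each tap from the screen-mounted microphone, boil it down to a few spectral features, and match it against example taps recorded ahead of time. The frequency bands, features, and nearest-centroid matching below are my own simplified guesses for illustration, not the actual TapSense pipeline.

```python
# Toy sketch of acoustic tap classification (not the real TapSense pipeline).
# Different parts of the finger make different "thuds" on the glass, so a short
# audio window around each tap is summarized by its spectrum and matched
# against previously recorded example taps.
import numpy as np

SAMPLE_RATE = 44100                              # assumed microphone rate
BANDS = [(0, 500), (500, 2000), (2000, 8000)]    # arbitrary frequency bands (Hz)

def tap_features(window):
    """Summarize a short mono audio window as normalized band energies."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / SAMPLE_RATE)
    energies = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in BANDS])
    return energies / (energies.sum() + 1e-9)

def train_centroids(examples):
    """examples: dict like {'tip': [window, ...], 'pad': [...], 'nail': [...], 'knuckle': [...]}."""
    return {label: np.mean([tap_features(w) for w in windows], axis=0)
            for label, windows in examples.items()}

def classify_tap(window, centroids):
    """Return the label whose average feature vector is closest to this tap."""
    feats = tap_features(window)
    return min(centroids, key=lambda label: np.linalg.norm(feats - centroids[label]))
```

The real system presumably uses richer features and a proper classifier, but the "listen, then match" structure is the same idea.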

This adds a whole new dimension to touch screens. For example, it could be used with drawing applications or any application that has many menus: touching with your knuckle or your fingernail could open different menus than touching with your fingertip.

Right now the software can’t work on smartphones because it requires the extra microphone. The microphones on current smartphones are optimized for picking up voices, not the sound of a finger tap. It could still be implemented, though, just with the addition of an extra mic.

Here is a video demoing TapSense:

To read more from the article this blog post is based on, click HERE. Also, remember to take the BlackBoard quiz!

Thanks!
-Mike

October 24, 2011 at 9:30 am

The Living Picture

Recently, a company called Lytro released a radical new camera design. If successful, this could change the entire landscape of photography as we know it. The advances come from what is known as light-field technology.

Even the shape of the camera is vastly different from the typical camera consumers are used to. As shown in the photo below, it is a 4.4-inch elongated rectangle that comes in “electric blue,” “graphite,” and “red hot.” Depending on the model, the camera holds anywhere from 350 to 750 shots. Compared to many of the industry’s cameras, the Lytro is extremely simple: it has a power button, a shutter button, a USB port, and a touch-sensitive zoom control.

The real revolution comes in its photo-taking abilities. Conventional cameras use a variety of lenses to focus on a single subject. When the subject is correctly identified, photographers produce amazing photos, but the areas not in focus will forever remain blurry. To produce these photos, traditional cameras capture only the light arriving from one direction as it hits the camera’s sensor. In contrast, light-field technology captures light arriving from many directions at every area of the sensor.
Rather than a traditional image, Lytro produces a 3D map that can be refocused at any time on various parts of the photo. The dynamic photos can be hosted on Lytro’s website, embedded in Facebook pages, or edited with a Mac application.
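
Lytro has not said exactly how its refocusing works internally, but the standard light-field trick is "shift and add": treat the capture as a grid of slightly offset sub-aperture views, shift each view in proportion to its position in the grid, and average them. The amount of shift chooses the focal plane. The sketch below assumes you already have that grid of views as an array; it illustrates the principle and is not Lytro's software.

```python
# "Shift and add" refocusing over a light field (illustrative only).
# light_field has shape (U, V, H, W): a U x V grid of sub-aperture views,
# each an H x W grayscale image.
import numpy as np

def refocus(light_field, alpha):
    """Average all views after shifting each by alpha times its offset from
    the central view; different alpha values pick different focal planes."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# alpha = 0 simply averages every view (the ordinary photograph); larger
# positive or negative alpha values refocus nearer or farther.
```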

The advantage this gives to photographers is unmatched by traditional means of photography. Rather than waiting for the image to come into focus, photographers can simply snap a shot and worry about focus later. The camera is also much quicker because the shutter is instant: traditional cameras waste precious moments auto-focusing, but that is not an issue with the Lytro. The photo is taken almost instantaneously because the focusing happens after the photo is taken.

The applications of this new camera design are limitless, and it will be interesting to see which types of customers pick it up first.

Included below is a short interview with the founder of the company; check out the following websites to learn more.

Interview with Lytro Founder

CNET

Lytro

Thanks for reading!

– Paul

October 20, 2011 at 6:20 am

3D Modeling for Crime Scenes

When people think of 3D modeling, most would think of using it for movies and TV or for the production of a product. I have done some 3D modeling at ISU and was interested in learning more about it. After some research I found an article that discussed how professional modelers are being used to help convict murderers.

After a crime scene has been photographed, modelers use the images to reconstruct the entire scene, which forensic investigators can then use to help solve the murder. For example, here is a video of a 3D model of the assassination of JFK. Though you might think this is a poor example of 3D modeling because we are not exactly sure who murdered JFK, it has helped us find more information about what happened that day.

Furthermore, anyone who has ever seen the TV show Dexter will understand this topic better. Dexter is a blood spatter analyst. He attaches strings to the blood spatters and traces them back to a focal point to estimate where the murderer was standing. In this video you can watch the techniques they use to see the different patterns that weapons leave, with the string analysis I mentioned shown at the end of the clip.

The spatter analyst can take this to a professional 3D modeler, who can bring it into the computer to see it in better detail. According to the article, “The first step is to use a laser scanner to make a 3D digital map of every object in the crime scene. The team also uses a digital camera to capture the shape of bloodstains. They then use another laser ranging device called a tachymeter to obtain a precise location for each blood spot in the 3D model.

Next, they calculate the mass of each drop from the size of its stain, and use this to calculate its maximum potential velocity – air drag would rip apart a droplet if it travelled faster than this limit. With that information, and an angle of impact estimated from the shape of the stain, their software projects a realistic trajectory backwards in time to locate the origin of the blood spatter. The 3D results give us good clues about the area of origin, the number of blows, the positioning of the victim and the sequence of events. The system has already helped in two murder inquiries, revealing in one that a woman killed by her husband was lying in bed rather than sitting up when attacked.”
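
The team's software adds those drag-limited ballistics, but the core geometry alone is simple enough to sketch. An elliptical stain's width-to-length ratio gives the impact angle (the sine of the angle equals width over length), the stain's long axis points back toward the source, and the back-projected lines from several stains converge near the origin. The toy example below uses straight-line paths and made-up data conventions, so it is only a simplified illustration of the idea, not the investigators' system.

```python
# Classroom-level sketch of blood spatter back-projection (not the team's software).
# Each stain is a tuple (x, y, direction, width, length): its position on the
# floor plan, the direction (radians) its long axis points back toward, and
# the measured width and length of the elliptical stain.
import math

def impact_angle(width, length):
    """Impact angle from stain shape: sin(angle) = width / length."""
    return math.asin(min(width / length, 1.0))

def convergence_point(stains):
    """Rough 2D origin: average of pairwise intersections of the back-projected lines."""
    points = []
    for i in range(len(stains)):
        for j in range(i + 1, len(stains)):
            x1, y1, d1, _, _ = stains[i]
            x2, y2, d2, _, _ = stains[j]
            denom = math.cos(d1) * math.sin(d2) - math.sin(d1) * math.cos(d2)
            if abs(denom) < 1e-9:
                continue  # nearly parallel lines give no useful intersection
            t = ((x2 - x1) * math.sin(d2) - (y2 - y1) * math.cos(d2)) / denom
            points.append((x1 + t * math.cos(d1), y1 + t * math.sin(d1)))
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def origin_height(stain, convergence):
    """Very rough height above the convergence point, assuming a straight path."""
    x, y, _, width, length = stain
    dist = math.hypot(x - convergence[0], y - convergence[1])
    return dist * math.tan(impact_angle(width, length))
```

The real software replaces the straight-line assumption with the drag-limited trajectories described in the quote, which is what makes its origin estimates reliable enough to use as evidence.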

You can read this entire article here and see some more pictures.

If something like this interests you, you don’t need to be a forensics investigator! You can take some 3D modeling classes right here at ISU.

Hope you enjoyed the article. Go take your extra credit quiz!

~Lauren

October 16, 2011 at 11:39 am

Speech Therapy Aided Through Use of Palatometer

The movement of the tongue plays a big part in the articulation of words and basic sounds. It can be difficult for speech therapists to tell how a deaf or speech-impaired person is moving their tongue without the aid of newer technology. The palatometer is a device, sort of like a retainer, that records the movement of your tongue.

The palatometer comes equipped with 118 pressure sensors that capture the subject’s tongue movement, and this data can be used in multiple ways. In one, the captured movement data is processed by a computer that can determine with over 94% accuracy what word the user is trying to say and then outputs a synthesized voice. One pressing issue with this technology is the time it takes to calculate the correct word: although the process is completed in a fraction of a second, the delay between the subject’s mouth moving and the audience hearing the voice is enough to give the appearance of watching an out-of-sync video on YouTube.
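
The article does not describe how the recognizer works, but one way to picture it: each instant of speech is a snapshot of which of the 118 sensors the tongue is touching, a word is a sequence of those snapshots, and recognition means finding the stored word whose sequence looks most similar. The template-matching sketch below is purely illustrative and almost certainly far cruder than the real system.

```python
# Illustrative template matcher for palatometer data (not the real recognizer).
# A "frame" is a length-118 vector of 0s and 1s recording which sensors the
# tongue is touching; an utterance is the list of frames recorded over time.
import numpy as np

def resample(frames, n=40):
    """Stretch or squeeze an utterance to a fixed number of frames."""
    frames = np.asarray(frames, dtype=float)              # shape (T, 118)
    idx = np.linspace(0, len(frames) - 1, n).round().astype(int)
    return frames[idx]

def distance(a, b, n=40):
    """Total frame-by-frame mismatch between two resampled utterances."""
    return np.abs(resample(a, n) - resample(b, n)).sum()

def recognize(utterance, templates):
    """templates: dict mapping each known word to a recorded example utterance."""
    return min(templates, key=lambda word: distance(utterance, templates[word]))
```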

The more practical use of this technology is in studying the movement of the subject’s tongue with specialized computer software that shows two simulated mouths. One mouth is the subject’s and the other is a speech therapist’s pronouncing words, phrases, and sounds. The software gives instantaneous visual feedback from both mouths so that the subject can try to imitate the movements of the speaker. You can read more here.

You can see how well this technology has worked in a specific case by watching the following video:

Go take the quiz!

-Luke

October 13, 2011 at 1:22 am

Romo, The Smartphone Robot

The majority of people nowadays have a smartphone. Whether it’s an Android-powered phone or an iPhone, almost everyone has one. We can listen to music on them, watch videos, tweet, check Facebook, or do just about anything. What if you could make a robot out of your smartphone? Now you can!

Two guys from Seattle, Peter Seid and Phu Nguyen, have created a new type of hardware for your smartphone: something they call the Romo. The Romo is a simple, affordable physical robot that you attach your smartphone to; it plugs into your phone’s audio jack. The robot uses basic analog electronics to trigger its two motors via sounds transmitted from your phone through the audio jack.
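
Romotive has not published the exact audio encoding, but the general approach of driving hardware from a headphone jack can be sketched: put one motor's control tone on the left channel and the other's on the right, and let the robot's analog circuitry turn each tone into motor power. The frequency, channel assignments, and amplitude-as-speed scheme below are assumptions made up purely for illustration.

```python
# Illustrative sketch of driving two motors over a headphone jack (the real
# Romo encoding is not published; channels and frequencies here are made up).
import numpy as np
import wave

SAMPLE_RATE = 44100

def motor_tones(left_speed, right_speed, seconds=0.5, freq=1000):
    """Encode each motor's speed (0.0-1.0) as the amplitude of a tone on
    its own stereo channel."""
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    tone = np.sin(2 * np.pi * freq * t)
    stereo = np.stack([left_speed * tone, right_speed * tone], axis=1)  # (N, 2)
    return (stereo * 32767).astype(np.int16)

def write_wav(path, samples):
    with wave.open(path, "wb") as wav:
        wav.setnchannels(2)
        wav.setsampwidth(2)                 # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(samples.tobytes())  # interleaved left/right frames

# e.g. full speed on the left motor, half speed on the right:
write_wav("drive.wav", motor_tones(1.0, 0.5))
```

Playing a file like this through the jack would, under that assumed scheme, drive the left motor at full speed and the right at half speed.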

When the robot is actually made available for the public to buy, it will also come with three apps for iOS and Android. The first app, called RomoRemote, lets you control the robot from another smartphone over Wi-Fi; it will even have a live view using the camera of the phone connected to the robot. The second app, called Romo Kart, is a mixed-reality version of the famous Mario Kart, which even includes “digital attacks” that let you “attack” other Romo robots. The third app is the one you run on the phone while it is connected to the Romo robot.

That app will also have drag-and-drop programming modules, so you can make the robot do tasks with simple drag-and-drop programming. This looks like a pretty exciting new technology! I think it would be fun to race a couple of these around with my friends and play some Mario Kart. Hope you guys enjoyed the blog post. You can view the original article here and watch a video about the project here if you want to learn more. Remember to do the BlackBoard quiz!

Thanks!

-Mike

October 10, 2011 at 9:30 am


