Real-time Embedded Linux and POSIX RTOSs For Microcontrollers (MCUs)

Friday, August 31, 2007

Autonomous Robots Starting To Become Real

A recent post on portable supercomputers shows just how easy and inexpensive it is becoming to build high-performance computers capable of doing advanced analysis of their environment using smart sensors.

If you can build a machine so inexpensively ($2,470) that has twice the power of Deep Blue and fits in a checkable airline bag, we are definitely close to a breakthrough.

According to James Albus (of NIST), another two orders of magnitude should do it, and service robots should become possible.

Coming to your home sometime soon....

Monday, August 20, 2007

And now the floodgates start to open... Tilera Ships

Hey;

Finally the new Sun chip has some competition. There is a good link dealing with this product here.

I am a bit surprised that the chip has L2 cache rather than just plain on-chip memory, given that it is intended for embedded applications. I think the real story is that it is intended to be more general purpose, but the embedded market for H.264 and other applications seemed a bit easier to penetrate near term.

In the future expect to see chips like this optimized for embedded applications.

Tuesday, August 14, 2007

Microchip dsPIC and DSPnano Offer Ultimate Integration

The Masters was great. I must say that meeting CEO Steve was a real highlight. He is so practical and down-to-earth. I'm sure he has had much to do with the great performance the company has delivered over many years.

The DSPnano for dsPIC product is highly complementary to the offerings Microchip has created. Their focus has been tiny I/O modules and debug tools; our focus is open-source DSP RTOS tools offering POSIX compliance, DSP libraries and next-generation development environments. As it turns out, we offer the glue it takes to exploit many smaller components for dsPIC and build a total solution.

For example, voice processing, SPI, LIN, USB, TCP, I2C and much more can be quickly and easily integrated into the RTOS through the POSIX interface, so all calls are standardized. Development can use the latest Eclipse technology, and debugging and display of target data leverage MPLAB, ICD2 and REAL ICE technologies.

The most amazing thing is that you can now cut your design time substantially. This combination along with 3-5 week lead times for parts accelerates your growth and lets you improve faster than your competition. Just ask Steve or read his book - this is a winning formula.

Thursday, August 9, 2007

Microchip dsPIC Expectations High

At the Masters 2007 it's clear that Microchip is continuing to execute on its 16-bit strategy for microcontrollers and digital signal controllers. The number of applications is staggering to me, as are the chip volumes being shipped and the revenue this company is generating.

It was only a little over fourteen years ago that they were on the brink of insolvency. Since then they have transformed their business to dominate the low-end microcontroller market worldwide. They are now number one in both volume and dollars.

Four years ago Microchip added the 16-bit line of products, and since its introduction it has done as well as or better than the original PIC. The scenario is exciting because they are delivering superior price/performance at the low end, dominating first on volume and then on overall revenue. Congratulations to all those hard-working people at Microchip!

The fallout from this for multicore is a bit obscure, but I don't see why they aren't considering putting multiple dsPIC cores on a single chip for very low-power, higher-performance applications. They are great engineers, so look for this in a couple of years.

Wednesday, August 1, 2007

Emotional Memory For Intelligent Machines

As we try to build more and more intelligent machines, it seems there are many lessons to be learned from the human brain about how to avoid dangerous situations and how to use learning as the basis of computer memory models.

Today, our idea of an intelligent machine that can understand situations from the scenes that it sees is to do something like the following:
a) Create a taxonomy of all the objects expected in an environment
b) Create relationships between the objects and have some idea of purpose and function associated with objects.
c) Look at a scene with sensors of various kinds and correlate 2D representations of 3D models to develop a list of related scene objects from the taxonomy.
d) From the structure, understand the scene.
e) Modify the scene, update the structure and update the understanding.
f) All object matching, relationship establishment, understanding and so on is done from taxonomy information saved in a relational database and recovered by search mechanisms related to relational tuples in the database.

If we have Stanley (Stanford's robot Touareg) driving down a desert road without much around, with waypoints along the way and maps of the terrain, this kind of approach sort of works. If you miss a waypoint (CMU's Red Team did this), big trouble may follow, because the system isn't really intelligent. Stanley didn't have the same weakness because it was smarter, but it still had many limitations.

Now consider how the human brain solves the same problem. It is a very different scenario.
a) The brain learns from birth, building up its knowledge of the surroundings.
b) Strong emotional response related to danger, pain, happiness etc keeps training the brain to remember certain scenes and experiences.
c) Discussion of events reinforces these event memories.
d) High-level abstract concepts are extracted and related to these strong memories, which are tied to the detailed memories, allowing further refinement and evolution of the concepts built upon them.
e) The strength of the emotion at the time creates a window by which to filter memory response time.

Would it not be relatively easy to add emotional memory to the computer system to improve response? Then the system could use strong emotional memories to respond quickly to critical events and take more time when events are not so critical.

The cost of this extra processing would be the cost of creating an emotional measure for each scene as it changes over time. This could be done in many ways, but it should be related to a variable score over a broad set of emotional words for a given language. But why stop at emotional words? Shouldn't we take a scene, create a scene-understanding dialog rating it on all manner of words related to the scene, and use that as a key for identifying all future scenes?

Imagine learning and feeling happy about what you're learning. The subject matter's abstract concepts, the emotional feelings, and the fact that your basic activity is learning are all keys to finding similar scenes. Emotional response could be a quick first pass, but any word with a measure of strength could serve to gauge the correlation between this scene and others with similar characteristics.

Has anyone seen any research in this area?