Thursday, January 28, 2010

Will the new Bill Gates please stand up?

Circuit Cellar, together with Texas Instruments, has launched a design contest around the Stellaris LM3S9B96 microcontroller. This 100-pin controller is based on a 32-bit ARM Cortex-M3 core with 256 KB of flash memory and 96 KB of RAM, and it sports many interesting features like CAN, USB 2.0 OTG/Host/Device, 10/100 Ethernet MAC and PHY, I2C, I2S, SSI, UART, PWM, ADC and other peripherals. Although this is more or less what you would expect from a modern 32-bit microcontroller, that is not all: the device also has a built-in library of almost 400 functions for accessing its peripherals. This BIOS (Basic Input Output System) provides a nice hardware abstraction layer for almost all of the registers and makes the programmer's life much easier.
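
To give a flavor of that library, here is a minimal sketch in the style of the StellarisWare driver library behind those built-in functions. The port and pin choices are illustrative only, and the ROM-resident versions of these calls are normally reached through ROM_-prefixed macros, so take this as a feel for the API rather than a verified build for the contest board:

/* Sketch only: blink a pin through named driver-library calls instead of
 * raw register writes.  Port/pin choices here are illustrative. */
#include "inc/hw_memmap.h"
#include "inc/hw_types.h"
#include "driverlib/sysctl.h"
#include "driverlib/gpio.h"

int main(void)
{
    /* Clock the GPIO port, then make one pin a push-pull output. */
    SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOF);
    GPIOPinTypeGPIOOutput(GPIO_PORTF_BASE, GPIO_PIN_0);

    for (;;)
    {
        GPIOPinWrite(GPIO_PORTF_BASE, GPIO_PIN_0, GPIO_PIN_0);  /* pin high */
        SysCtlDelay(SysCtlClockGet() / 6);                      /* crude ~0.5 s delay */
        GPIOPinWrite(GPIO_PORTF_BASE, GPIO_PIN_0, 0);           /* pin low */
        SysCtlDelay(SysCtlClockGet() / 6);
    }
}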

To top things off, the device also has a built-in real-time operating system (RTOS)! According to the datasheet the controller carries a copy of SafeRTOS in ROM, a safety-oriented version of FreeRTOS developed by Wittenstein. Unfortunately, the datasheet isn't very verbose about it, but it is there.
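
For an idea of the programming model this brings, here is a minimal FreeRTOS-style task sketch. FreeRTOS is the open-source kernel SafeRTOS is derived from, and the SafeRTOS entry points in ROM differ in detail (they expect caller-supplied task buffers, for example), so treat this as a flavor of task-based code on such a device, not the exact ROM interface:

/* Sketch only: one task, created and handed to the scheduler. */
#include "FreeRTOS.h"
#include "task.h"

static void vBlinkTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;)
    {
        /* Toggle an LED, poll a sensor, etc. */
        vTaskDelay(500 / portTICK_RATE_MS);   /* sleep roughly 500 ms */
    }
}

int main(void)
{
    /* Create one task and hand control to the scheduler. */
    xTaskCreate(vBlinkTask, "blink", configMINIMAL_STACK_SIZE,
                NULL, tskIDLE_PRIORITY + 1, NULL);
    vTaskStartScheduler();

    return 0;   /* Only reached if the scheduler fails to start. */
}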

Now what do you call a processor with peripherals, a BIOS and an operating system? A computer! Indeed, this microcontroller is pretty much like a computer on a chip (CoC) and the only thing missing is a graphical interface, but that is probably just a matter of time and/or pin count.

Are we entering a new era of microcontrollers? Will this be the new standard architecture for the next generation of microcontrollers? What about code portability? Will the user get access to the source code of the built-in BIOS and RTOS? Will ARM, TI & Wittenstein be the next Intel, AMD and Microsoft?

At the end of 2008 I attended an ARM conference in Paris. The buzzword then was "code portability": if only everybody were using ARM-based processors, code would be easily portable from one device to another. Luminary, the developer and former owner of the Stellaris processors, was present too. But is the BIOS & RTOS they now put in their devices good for code portability? Will other ARM-based microcontroller manufacturers also integrate a compatible BIOS & RTOS in their devices, or provide compatible libraries? Or are we going to have to deal with tens of different BIOSes and RTOSes in the future, supported by code-bloating tests to figure out what the heck the platform actually in use is capable of?
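
As a purely hypothetical sketch of that fragmentation (every vendor name and function below is invented), this is the kind of wrapper-and-#ifdef layer applications end up carrying today just to drive one LED across device families:

#include <stdint.h>

/* Hypothetical board definitions. */
#define LED_PORT  0
#define LED_PIN   (1u << 0)

static void board_led_write(uint32_t on)
{
#if defined(USE_VENDOR_A_DRIVERLIB)
    /* Vendor A style: named driver-library call (hypothetical). */
    VendorA_GpioWrite(LED_PORT, LED_PIN, on ? LED_PIN : 0);
#elif defined(USE_VENDOR_B_HAL)
    /* Vendor B style: HAL with its own enums (hypothetical). */
    VendorB_HAL_PinSet(LED_PORT, LED_PIN, on ? VB_SET : VB_RESET);
#else
    /* Fallback stub so the sketch builds stand-alone. */
    (void)on;
#endif
}

int main(void)
{
    board_led_write(1);
    return 0;
}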

Devices will get bigger and will integrate more and more. Within a couple of years they will be as powerful as a modern PC is now, with a built-in BIOS and OS. This all smells so much like Microsoft and the OS wars of past years.

NOOOOOOOoooo!!!!!

Please, not again!

2 comments:

  1. Luminary Micro (now TI) distribute an application note on using SafeRTOS from ROM with their StellarisWare driver library, but the best way to learn is by using the example projects. The CD that comes with the development kit contains an example using the Keil tools (the official competition compiler); WITTENSTEIN (http://www.SafeRTOS.com) provide further examples using the IAR and GCC compilers.

    With regard to obtaining source code, the source code of the driver library is available from Luminary Micro. The source code of SafeRTOS is not available, however the source code of FreeRTOS is.

    Finally, regarding portability: CMSIS (the Cortex Microcontroller Software Interface Standard) provides some very low-level portability, although personally I'm not a fan of this. SafeRTOS uses a higher-level API, so it is portable across different architectures if you have the relevant ports - just like any other RTOS kernel. FreeRTOS runs on 23 different architectures, providing the ultimate in portability ;o) I'm biased though!
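
    As an idea of what that low-level CMSIS portability looks like in practice, here is a minimal sketch; the device header name is only an assumption, as every vendor ships its own:

    /* Sketch only: the core SysTick timer and its handler are reached the
       same way on any Cortex-M3, whichever vendor made the chip. */
    #include "lm3s9b96.h"   /* vendor-supplied CMSIS device header; name assumed */

    volatile uint32_t g_msTicks;

    /* CMSIS names this handler identically on every Cortex-M device. */
    void SysTick_Handler(void)
    {
        g_msTicks++;
    }

    int main(void)
    {
        /* One CMSIS call configures a 1 ms system tick. */
        SysTick_Config(SystemCoreClock / 1000u);

        for (;;)
        {
            __WFI();        /* sleep until the next interrupt */
        }
    }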

  2. Bloated microcontrollers, is that our future? Running a (tiny) application through an RTOS, BIOS and APIs as abstraction layers means more demand for flash, RAM and MIPS. It means more sales, more dollars, at your expense. Why would you go this way? The pathetic answer is: because of the utopia of code portability.

    You'll try it, you'll hate it, but you'll use it. At the end of the day you'll get a uC system that you can barely control and that you are forced to recompile each time there is an API specification change. Why? Because your customer won't tolerate you not running the latest API version. Silicon vendors, indeed, will use marketing gimmicks to push your customers in this direction. Call it a nightmare. Is embedded electronics based on the wrong paradigms nowadays? Is there another, safer way?

    What we actually need in a uC is:

    1) fewer pins,
    2) more built-in standard peripherals like 4 x I2S, 6 x Enhanced Serial, 4 x I2C, 8 x SPI, 4 x CAN, 16 x PWM, 4 x LVDS camera, multi-touch, ...
    3) a built-in FPU as co-processor (optional),
    4) a built-in DSP56K as co-processor (optional),
    5) a built-in hardware TCP/IP stack (optional),
    6) a built-in audio/video encoder and decoder for MP3 audio and MP4/H.264 video (optional),
    7) fully assignable pins (this is how you end up with fewer physical pins),
    8) a double video backplane in RAM, up to 480x272x24 bits, on the CPU die (extensive DMA),
    9) a basic 2D video accelerator and controller outputting LVDS (no parallel interface),
    10) 400 MHz CPU clock speed with refined cache management,
    11) no Linux-compliant MMU,
    12) no RAM or Flash expansion bus,
    13) a decent RAM size, between 1 MB and 8 MB, with a vestigial MMU scheme,
    14) a decent Flash size, between 512 KB and 4 MB,
    15) about 10K uncommitted gates usable as an FPGA, for internal co-processor(s) or external custom peripheral(s).

    It may look completely mad for a tiny uC, but it is only a scaled-down version of what nVidia is already delivering with the Tegra processor!

    If we need code portability, let us use Java. For critical sections that need execution speed, pre-compiled Java code can be efficient. Pre-compilation may be done on-chip, just after booting. A whole embedded telco application and/or a whole GUI may be programmed and maintained this way. Do you like it?

    A physical device measuring about 5 x 5 mm, available in 64 or 100 pins. Internally not a monolithic device, but a stack of four inexpensive, high-yield, specialized dies: CPU, FPGA, RAM, FLASH. Easy-to-manage internal interconnect buses with an interconnect matrix, minimizing pin count. Low performance penalty. Individual dies produced in obsolete 90-nanometer factories.

    Imagine the production cost of such a 1 MB to 8 MB RAM die, or of such a 512 KB to 4 MB Flash! The same goes for the FPGA die: a 10K gate count is very small compared with what gets sold as individual chips.

    The same FPGA, RAM and FLASH dies could be used for many different uC architectures like ARM, MIPS, SPARC, PIC32, R32C, ... Huge volumes. Economies of scale. What's important for reaching those economies of scale is agreeing on a common interface.

    Many silicon and uC vendors have been selling their ARM-based uCs at a loss, with prices well below the 1-dollar barrier. That's incredible. It can't continue like this. Silicon and uC vendors need to concentrate on CPU and peripheral architectures without bothering about FPGA, RAM and FLASH technology. More focused R&D is desirable. This may be the key to returning to profitability.

    We'll see the emergence of a new uISA (Microcontroller Industry Standard Architecture) in the embedded uC world. It won't be expressed on the board, but internally, in the die stack. It is only when we approach this stage, with the emergence of a new uISA, that APIs will gain some stability. That's very possible. We may be very happy with this, won't we?
