Innovation Tales

It’s difficult to be relevant in the middle of 2020 without having wisdom to offer in regard to dealing with a pandemic or easing racial tensions while addressing social injustice.  Not to mention dealing with environmental challenges, addressing education and economic fairness, and finding the right leadership to manage the needed processes.  While I have my own ideas about most, if not all, of these topics, I’ll keep them to myself for the time being.  So instead of being topical, I’m telling stories about the good old days when we were changing the world.  Maybe something like that will help when we “re-open”.

What is working for us through all of this is getting new ideas out in the open and finding ways to implement them.  We are a remarkably adaptable species, even if we are responding to problems of our own making.  Innovation is not just about ideas.  In fact, a lot of “ideas” are just escaping brain gas.  Impactful innovation needs to be thought through, experimented with, proven, and applied.  This essay is a collection of stories sharing my experiences with innovation.

Writing a utility patent can be a good test for a useful idea since the Patent Examiner will only approve what can be proven to be “useful” and “original”.  In academia, there is often competition over citations for research papers or inventorship of patents, and my ego must have picked up on that.  Perhaps I would be a great inventor like Thomas Edison, who had a sizable ego and enjoyed the credit but also acknowledged standing on the shoulders of others, such as the several researchers who helped run his dozens of light bulb experiments.  I am a named inventor on 26 patents from my Intel days and on 2 more of my own since.  I invented the Flash File System, my most important contribution, but I ended up disappointed at not being named the sole inventor.  What I didn’t understand at the time was that a patent needs only to improve, to any degree, upon current practice.  That is the nature of all great innovation: building upon what is known.  Teamwork: using all the available good minds to help.  It is a fantastic time to be alive in this regard, with the ability to search and study any number of interests and share your ideas instantaneously.  The following “tales” are from the times we were creating cool new stuff very rapidly at Intel, my experiences with innovation – – the kind of experiences I’d hope anyone can get a taste of when the opportunities arise.

Today “solid state disk” (SSD) flash drives are found in every imaginable kind of mobile device, and we take their suitability for storage for granted.  In 1990, the year of my patent, Intel began production on the first viable flash memory, a very low-cost single-transistor cell technology subsequently labeled NOR Flash due to its logical characteristics in an array.  It was not ideally suited to frequently changing data storage; it was better for software code, as a rewritable ROM memory to replace its predecessor, the EPROM.  Toshiba was working on the better candidate for SSDs, the NAND implementation, but it would take several years for its reliability and cost structure to reach parity with NOR.  Several interesting stories describe the meandering path to get to where we are now.

We could imagine a future without heavier, less reliable magnetic disk drives, but the world wasn’t ready to change its computer architectural model, with an industry steeped in knowledge of how disk drives work.  The problem for flash was that it could not be overwritten in small “sectors” the way data are stored on disks.  The flash drive I envisioned was not encumbered with the higher costs of other types of memory (the approach upon which our manager insisted).  My goal was to store all the data structures in flash after each usage session to deliver what I thought customers wanted: highest reliability at lowest possible cost.

Other teams also worked on concepts for storing disk-sized data sectors within the much larger flash memory “erase blocks”, but by using expensive EEPROM, or similarly expensive SRAM plus battery, for directory structures.  I wrote up my “pure-flash” file system invention in a draft patent, which was then “expanded” to include the other implementations.  This never would have been done had management understood how patents work (apparently thinking we’d get a “three-for-the-price-of-one” in regard to attorney fees).  However, a patent must describe a single invention, so that conglomerate patent had to be split back up into separate patents, the other two of which were never completed and approved.  Alas, there are other names on the patent besides mine, but so what?  Patents are not about bolstering the egos of inventors; they are about describing useful inventions and making those public.  Well, patents have become a whole lot of other undesirable things, but that’s another discussion.  This was a good example of how not to foster innovation: having people work out specific implementations (especially those directed by managers) instead of focusing on customer value, utility, and cost.

For the purposes of innovation, my patent got the ball rolling, but that specific shape of ball didn’t roll very far since it was never implemented as written.  The value of the innovation was in basic principles that stood the test of time.  There have been numerous patents on solid state disk systems, and in some of the more recent ones (mid-to-late 2000s) patent holders tried to enforce claims to obstruct their competitors.  It was very rewarding to serve as a fact witness in a couple such cases showing that “prior art” existed in the form of my patent – – I had already come up with those same ideas.  The patent system attempts to vet new ideas for originality but cannot always do so.  A “basic” patent is one considered foundational, identifiable by how often it is cited in subsequent patents: the basic ideas of my invention were in fact basic.

I was a decent system architect and had good experience in microcontroller programming, but I sure didn’t know database architecture and software.  My directory structure was 100% flash stored but it was convoluted and complex. Good people from Intel’s systems labs were asked to help, and they generated workable ideas for a flash directory along the lines of what was eventually used.  Nice to be named on that patent for my bit part, but that collaboration transformed my idea into something implementable.

We continually sought Microsoft’s help to build a flash file system, telling them how big the business would be, but it didn’t represent any near-term business to Microsoft.  They were working on the NTFS file system at the time, and a “mountable” file system unique to flash was proposed along those lines.  A completely new file system tailored to the new medium was the only way to meet all the requirements I had laid out, but older versions of the Windows OS didn’t handle installable file systems well, another reason why flash drives had to “act like disks.”

We took on the effort of assembling the flash into small packages which could be used to make “PCMCIA” (Personal Computer Memory Card International Association) cards.  The near-term opportunity for Intel was much larger than flash cards alone: the other product groups wanted to exploit the card capability, building cards for external options such as modem and Ethernet LAN connectivity.  I managed the effort to compile a new specification called ExCA (Exchangeable Card Architecture), which added I/O to the basic PCMCIA bus.  I chaired the meetings and resolved Word doc edits while this group came up with the necessary pieces to make it all work.  The challenge was immense in the old DOS PC days with limited I/O ports, interrupts, and system memory, most of which were already configured in hardware within the PC.  The ExCA software had to query the system to figure out what resources were available, assign them to the PCMCIA card and the notebook PC’s ExCA socket controller, repeat that process if another card were inserted in a second socket, and then release those resources when a card was removed.  It was a large cross-organizational effort to design and build the socket controller and the cards (LAN, modem, flash) and to write all the software to make it work in the mobile PCs where these were to be employed.  But we weren’t in the software business, so we gave it to Microsoft to integrate and distribute.  This software was deployed as the first “Plug and Play” in Windows 95.  Those platforms will never be remembered fondly due to numerous problems with Win95, but it was amazing to get it all to work, especially if you know anything about DOS/ISA bus PC systems.  Over time the ISA bus evolved to EISA, which was gradually replaced by PCI, and Windows software broke out of the DOS I/O and 640KB memory constraints.  Accordingly, PCMCIA moved to “CardBus”, a PCI implementation, so Plug and Play evolved into a capability that worked well, at least most of the time.
It was an amazing experience to get to that starting point through great innovation, before which it “couldn’t be done”.  Everything is Plug and Play now.

The Flash File System was also deemed impossible.  I ran numerous calculations to show that flash could be made to look like a disk since the READ speeds were very fast, like main memory, and WRITE speeds were on par with magnetic disks.  But flash couldn’t work exactly like a disk, overwritten in small sectors, as everybody else insisted it must.  We could write to it in very small pieces, bytes or words, much smaller than sectors, but it first had to be erased in very large blocks.  The basic concept is very simple.  You just treat those blocks like “WORM” [Write Once Read Many] media such as CD-ROMs, for which file systems already existed.  The difference with flash is that you have to erase blocks at some point before you can rewrite them, through a process called “cleanup”.  Take blocks having lots of deleted files, transfer the remaining good/active files to a different block, then erase the old block to turn it into a new (unwritten) one.  That’s it; that’s how your ubiquitous USB thumb drives and all your other flash storage work.  The “basics” of flash file systems.
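The “cleanup” idea above can be captured in a few lines of code.  This is a toy sketch of my own, not firmware from any real drive: block and page sizes are tiny for illustration, and all names are made up.

```python
# Toy model of flash "cleanup" (garbage collection): data can be appended in
# small pieces, but space is reclaimed only by erasing whole blocks.

BLOCK_SIZE = 4  # pages per erase block (unrealistically small, for clarity)

class FlashSim:
    def __init__(self, num_blocks):
        # Each block is a list of pages; a page is ["live", data] or ["dead", data].
        self.blocks = [[] for _ in range(num_blocks)]
        self.erase_counts = [0] * num_blocks

    def _free_block(self):
        # An empty block is "erased" and ready for fresh writes.
        for i, b in enumerate(self.blocks):
            if not b:
                return i
        raise RuntimeError("no free block: run cleanup first")

    def write(self, data):
        """Append data wherever there is room; flash can't overwrite in place."""
        for b in self.blocks:
            if b and len(b) < BLOCK_SIZE:
                b.append(["live", data])
                return
        self.blocks[self._free_block()].append(["live", data])

    def delete(self, data):
        """Deletion just marks pages dead; cleanup reclaims the space later."""
        for b in self.blocks:
            for p in b:
                if p[0] == "live" and p[1] == data:
                    p[0] = "dead"

    def cleanup(self):
        """Copy live pages out of the dirtiest block, then erase that block."""
        victim = max(range(len(self.blocks)),
                     key=lambda i: sum(1 for p in self.blocks[i] if p[0] == "dead"))
        live = [p[1] for p in self.blocks[victim] if p[0] == "live"]
        self.blocks[victim] = []             # the erase: block is free again
        self.erase_counts[victim] += 1
        for d in live:                       # rewrite surviving data elsewhere
            self.write(d)
```

Filling a block, deleting a few files, and calling `cleanup()` shows the whole trick: the surviving data migrates, and the dirty block comes back as fresh, erased space.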

Sounds simple, but the “powers that be” didn’t buy the performance numbers on my Excel spreadsheet.  To them, SSDs had to have the same logical and performance properties as magnetic disks.  It would take 25 years for Intel to invest in the “correct” technology and become the leader in SSDs.  Several other semiconductor companies competed directly with Intel in its mainstream NOR flash business while others focused on the “ideal”, more disk-like NAND technologies.

Long before those technologies were perfected, there were customers who could see the value of solid state storage using NOR flash “as is.”  FedEx was revolutionizing the shipping business with its sophisticated tracking system enabled by handheld “wands,” barcode/data entry devices built by a company called HandHeld Products (HHP).  These wands used the lowest cost SRAM devices with battery backup for data retention, and very small form factors were enabled by small outline packaging.  HHP asked Intel to package its flash in these same packages for use in the FedEx devices since flash cost less than a quarter of SRAM.  This was a huge opportunity for my product group, so I argued in a strategic meeting for developing the packaging.  The general manager needed to see “real business” before funding that expensive development, but it was “chicken and the egg” – the customer needed to have the flash devices in the right packages so it could develop and test new flash-based prototypes.  I’ll never forget being shouted down “in public” for pushing a little too hard too long in that meeting, cratering my confidence since I was still new to the group (it left me near tears I was so angry).  As middle managers discussed the opportunity with the GM, he came around, we built flash in the new packages, and we supplied the flash for thousands of FedEx wands.  This also paved the way for use in PCMCIA cards and for other designs in other product groups also needing the smallest footprints.  Again, Intel was not to enter the flash drive business for many more years, and not with the technology we had, but just look at the meandering path of innovation: people building on what they have, adding new ideas, and coming up with better solutions.

At the same time, a company from Israel, M-Systems, was working on military applications and recognized the value of using solid state memory even though flash memory was far more expensive.  Rotating magnetic disks had much higher failure rates in high-G and high-shock environments, so solid-state implementations were worth the price premiums in mission-critical systems.  M-Systems used our flash initially since it was the only flash technology in volume production.  Just as with PCs, the simplest adaptation was to make it look like a disk, which they did!

At that time, Microsoft was too busy to implement the “real” Flash File System (per my proposal), which provided for reliability management in terms of recording the number of times each block was erased and re-used (10,000 cycles was the best guarantee Intel and other suppliers were willing to make).  Alongside the “cleanup” process to allocate new free space, my specification called for rotating blocks out of frequent-rewrite usage to limit the number of write/erase cycles.  But, as mentioned, this “real” FFS of my patent was never implemented.  Intel did not yet have larger flash devices with smaller erase  blocks in production, so the first PCMCIA “Series 1” flash cards contained a few devices that erased the entire device array at a time.  They looked like a collection of writeable CD-ROMs, and Microsoft already had a file system for that, so “FFS-1”, Flash File System 1, was just the CD-ROM “WORM” file systems using Intel’s low level device drivers to control the flash.  This was not at all what Intel or any of its customers wanted for SSDs, so we kept pushing Microsoft to build FFS-2, the “real FFS”, which, again, never happened.
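The write/erase cycle tracking requirement above is easy to sketch.  This is an illustrative policy of my own choosing (always reuse the least-worn erased block), not the scheme from my specification or from any shipping product; the 10,000-cycle figure is the guarantee mentioned in the text.

```python
# A hedged sketch of erase-cycle tracking ("wear leveling"): count erases per
# block, retire blocks that hit the guaranteed cycle limit, and steer reuse
# toward the least-worn blocks.

CYCLE_LIMIT = 10_000  # guaranteed write/erase cycles per block (per the text)

class WearTracker:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks

    def erase(self, block):
        self.erase_counts[block] += 1
        if self.erase_counts[block] >= CYCLE_LIMIT:
            # A real design would retire this block, or swap its contents
            # with a rarely-erased "cold" block to spread the wear.
            raise RuntimeError(f"block {block} reached its cycle limit")

    def pick_block(self, erased_blocks):
        """Level wear by reusing the least-erased of the available blocks."""
        return min(erased_blocks, key=lambda b: self.erase_counts[b])
```

The effect is that frequently rewritten data rotates through the whole device instead of grinding one block down to its cycle limit.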

M-Systems got started with military applications, building flash disk drives in small custom or miniature magnetic disk form factors and then PCMCIA cards.  Their marketing name for their file system was TFFS “True Flash File System,” a really interesting piece of irony.  That marketing name was positioned against Microsoft FFS1 which didn’t work at all like a disk.  TFFS did work like a disk, making it the “true” flash file system, but it wasn’t a file system at all.  When we discussed it as a standard in the PCMCIA forum, the software committee dubbed it “FTL” for Flash Translation Layer, which is what TFFS really was.  The software acts like any normal magnetic disk media device driver, so the file system is actually the same one the OS is using for disks:  FAT, NTFS, what have you.  The FTL software accepts the same commands used for magnetic disks, translating them into flash media specific operations.  Very clever work, which you can investigate on your own if you’re interested.
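The essence of an FTL can be shown in a few lines.  This toy version of mine keeps an append-only log and a logical-to-physical map; real FTLs (including whatever TFFS actually did internally) also batch erases, survive power loss, and correct errors, none of which is modeled here.

```python
# Minimal illustration of the Flash Translation Layer idea: the OS issues
# ordinary disk sector reads/writes, but a "rewrite" really appends a new
# copy elsewhere in flash and updates a logical-to-physical map.

class FTL:
    def __init__(self):
        self.flash = []   # append-only log of (sector, data) records
        self.map = {}     # logical sector number -> index into the log

    def write_sector(self, sector, data):
        # Looks like an in-place disk write to the file system (FAT, NTFS...).
        self.map[sector] = len(self.flash)
        self.flash.append((sector, data))  # the old copy stays behind, stale

    def read_sector(self, sector):
        return self.flash[self.map[sector]][1]

ftl = FTL()
ftl.write_sector(7, b"hello")
ftl.write_sector(7, b"world")  # an "overwrite" = new copy + map update
```

Because the translation happens below the sector interface, the OS file system never knows it is talking to flash – which is exactly why the approach needed no new file system at all.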

Here again, the meandering path of innovation is fascinating.  Intel wanted something custom-tailored that dealt directly with flash solid state media, but the OS-level file system was not to be.  Intel entered the flash drive business building direct ATA hardware- and software-compatible flash drives.  Then exited.  Then re-entered some 25 years later with new technology to become a dominant supplier.  But in the small form factor devices, PCMCIA cards and then USB flash drives, it was M-Systems and then SanDisk.  I was concerned that nobody dealt with my “requirement” of write/erase cycle tracking, but that was also a flash device-centric viewpoint.  The NAND flash technologies of the other suppliers had random data errors that could conveniently be detected and corrected with the same techniques used for rotating magnetic media.  Error correction continues to be an important solution to this day as flash devices are pushed to very high densities, encoding multiple bits per memory cell.  The M-Systems approach, “just make it work”, determined the path of direct magnetic disk emulation, which has proven its staying power all these years.  So good innovation is definitely shaped by building upon what is known and can be deployed, not necessarily the most novel (“patentable”?!) idea.

There are several other good flash memory stories.  This one will seem absurd in these days of 512GB flash microSD cards with which you can expand your smartphone memory.  The ROM that stored the startup code, or BIOS, for early PCs was 64K or 128K bits, the same size as early flash devices.  Industry convention was to use plastic-packaged OTP (One Time Programmable) EPROMs in early production, then switch to lower cost ROM devices after the BIOS software was proven.  We thought a flash-updateable BIOS was a useful idea.  As the Flash Product Line Architect, I proposed a “boot block” architecture where a couple of smaller blocks could be placed at the bottom (for Intel CPUs) or top (for Motorola CPUs) of memory to store the processor’s bootstrap code.  Once that code was transferred for execution in system RAM, the flash device could be taken offline so that the large main block of flash could be erased and rewritten.  I had one of my staff technicians write code to do this exact thing over good old analog phone lines via modem, proving the concept.  We presented this idea to all the major PC OEMs at the time, but there were no takers.  Every penny counted.  Volume prices of ROMs or even OTP EPROMs were $1 or less, while Intel needed $8 to $15 to make a profit on the flash devices at that time.  We asked IBM how much it cost to change out an EPROM or ROM BIOS device if a software bug were found.  The answer was about $200 (requiring service calls to customer sites), but the BIOS is tested so thoroughly by the time production ramps up that they “never have to update the BIOS”.  A few months after that sales call, there was a BIOS bug in one of the earlier PS/2 PC models that required field replacements in more than 150,000 systems already in customers’ hands, and the service call costs were closer to $300 per system.
You can do the math (!!), but adding $10 more to the motherboard cost for a flash device immediately made sense, just in case a BIOS bug that “never happens” actually happens.  Flash made it in the next design cycle, and within a year it was utilized by all the major PC OEMs.
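The boot-block update sequence can be reduced to a few steps.  This is a deliberately simplified Python sketch over a byte-array “device”; the hardware-specific parts (taking the flash offline, executing from RAM) are only represented by comments, and the function name is my own.

```python
# Sketch of the boot-block BIOS update sequence: the small boot block is
# never erased during a normal update, so a failed update stays recoverable.

def update_bios(boot_block: bytes, main_block: bytearray, new_image: bytes):
    # 1. The boot block holds the bootstrap/recovery code and stays intact.
    assert boot_block, "bootstrap code must remain in place"
    # 2. (Hardware step) Update code is copied to and run from system RAM,
    #    so the flash device can be taken offline for erasure.
    # 3. Erase the large main block; flash erases to all-ones (0xFF).
    main_block[:] = b"\xff" * len(main_block)
    # 4. Program the new BIOS image, then verify it before rebooting.
    main_block[: len(new_image)] = new_image
    assert bytes(main_block[: len(new_image)]) == new_image
    return main_block
```

The key design point is step 1: because the bootstrap code survives any interrupted update, the $300-per-system service call turns into a software reload.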

Another flash application idea was to replace the EPROMs used in automotive engine controllers.  I arranged a meeting to present flash memory to members of CARB, the California Air Resources Board.  CARB determines CA emission standards, which are more stringent than in the rest of the US.  I thought that it would be useful to update the engine emission control code when better software routines were developed, but they were in shock, like “hell, no!”  That was exactly what they didn’t want, what the hot-rodders like to do: swapping out the EPROMs with “hop-up code” that blows away the emission controls to yield better performance (still occurs today to some extent, but now with flash).  They were interested in writable non-volatile memory, asking a question I didn’t understand: “Can you freeze a fault?”  “What??”  “Can you store an event when some part of the emission control system fails?”  Sometimes automotive components fail intermittently, and the car might even pass a CA SMOG test, but the component may fail much of the time, leading to added emissions.  I told them “sure thing!” and explained how that could be done in the “boot block architecture” flash devices.  They made a ruling that flash had to be designed into the engine control modules of all vehicles sold in California two years from then, i.e. changing the current designs immediately given the industry’s long development cycles.  The “Big Four” US manufacturers made the switch right away, with Japanese and European manufacturers following shortly thereafter.  Just like with the PC BIOS: boom, and a whole industry converted virtually overnight (in terms of product design cycles).  Years later I was miffed to pay $500 for the dealer to replace a $75 O2 sensor, something I had done myself on my old 1980 320i.  There were two very inaccessible sensors, I didn’t know which one had failed, and I was out of time to pass the SMOG test.
I had done some work at Intel on the CAN (Controller Area Network) now widely used in automobiles, and you can buy your own low-cost diagnostic code reader to plug into the CAN socket to find out what is ailing your car – – good tip even if you don’t do your own service: comes in handy negotiating with the dealer service department.   Oh well, it brought a smile to my face recalling how the OBD (On Board Diagnostic) flash storage that caught my intermittent sensor got designed into my car in the first place!

Meanwhile, in that same timeframe beginning in 1987, Apple started to develop a small form factor computing/communication device called Newton, defining a new category John Sculley dubbed “Personal Digital Assistant”.  Early prototypes used miniature magnetic disks, but flash was obviously a good candidate for small form factors and low power.  There were many useful concepts, most of which were ultimately realized in the modern iPad.  The Newton had many issues in final development and made its late debut and short production run in 1993, followed by turmoil resulting in Sculley’s departure.  Like all things Apple, the Newton was graphics interface oriented, and a key feature was handwriting recognition, which never worked as well as hoped.

Years before that, in 1982, even before the IBM PC, a company called GRiD Systems was making the first clamshell notebook PC, the GRiD Compass 1101.  As our first flash devices were entering production in 1988-89, we called on GRiD to try to sell them on flash SSDs versus magnetic disks for the first tablet PC, the GRiDPad.  We met with their VP of Research, Jeff Hawkins, who left GRiD in 1992 to start up Palm Inc.  Here’s where my story of flash memory diverges a bit to other innovations and “small world” irony.  The Palm Pilot was the first truly successful PDA, and I owned a couple of later incarnations.  Before smart phones, these PDAs were the other half not handled by the basic cell phone – great for jotting down appointments, notes, and lists, and for storing and reading files, all of which you could sync to your desktop PC.  Here’s where great innovation prevailed, delivering capabilities promised by the Newton.  The PDA did not need to learn how to read random haphazard handwriting if users could learn a shorthand for text entry.  The original Graffiti was brilliant.  As Wikipedia reports, Hawkins recalled his insight: “And then it came to me in a flash. Touch-typing is a skill you learn.”  You could learn it and use it quite efficiently.  Of course now we can go much faster with smartphone touchscreen keyboards or voice-to-text dictation, but this was the best way to do it with the technology of that time.  To me, those moments of great insight are magical, something we would develop into a ready skill if we could.

Speaking of basic cell phones, Intel NOR flash had dramatic growth in that industry, supplying the nonvolatile memory for devices made by Nokia and all the other major suppliers.

Jeff Hawkins and his Palm founders left in 1998 to start Handspring, so the next parable in these tales of innovation is about what happens when you discard what made your original idea great.  The brilliance of the original Graffiti was using a single stroke per character, so that each stroke started a new character, best utilizing a small screen space.  Graffiti was replaced by Graffiti 2, which was actually a system named Jot developed by Communication Intelligence Corporation.  Some claimed that this system was “better” and “more natural”, so I assumed the typical “new guy syndrome” (who has to change everything) was the reason for the change, but it was actually driven by the need to get out of an endless legal battle with Xerox.  Xerox’s Palo Alto research lab invented key innovations deployed in Apple’s revolutionary Macintosh PCs (i.e. the graphical user interface and mouse) as well as a stylus text entry language they accused Palm of infringing upon.  Xerox never manufactured products in these categories, staying focused mostly on its mainstream copier and document processing businesses.  This represents the dark side of patent law to me: “trolling” to extract penalties or licenses instead of furthering the art.  In any case, relative to the original Graffiti, Graffiti 2 was anti-innovation, throwing the baby out with the bathwater.  The “natural” characteristic of Graffiti 2 was using two strokes of the stylus the way you do with pen and paper, but this was a bad idea.  A single downward stroke used to be “i” and a single-stroke upside-down L was “t”, which I never got wrong in Graffiti, but Graffiti 2 forced you to dot the “i” or cross the “t” within a small time window or you ended up with the number “1”.  [Per Wikipedia: “i” and “t” are the fifth and second most frequently used letters in English, hence a frequent problem.]  The letter “k” also required a quick second stroke.  The error and re-do rate for text entry on my final Palm, the Tungsten T5, was such a pain that it became an annoyance to use.

In addition to flash, I also worked on neural network prototype chips based on EEPROM technology, so I became very interested in AI.  An early customer for those devices was Nestor Inc., which became Nestor Traffic Systems and developed the AI used to detect red-light traffic violators – a small world experience I would come back to.  In my ongoing AI learning, I read Jeff Hawkins’ book “On Intelligence”, in which he described his theory of the brain’s neocortex function.  This is the basis of his work in “biologically inspired machine learning technology” at his current company, Numenta.  I was fascinated by “On Intelligence”, thinking “Yes!!  This is exactly how our brains work!”  In spite of having met with him three decades earlier, I only recently learned that I was a college classmate and former Intel co-worker of Hawkins.  Too bad I was so busy with the Intel work, because I would have enjoyed being on his teams.  I’ll settle for being a fan and following what comes out of Numenta.

Around 1993 we got to play with early prototype “smart phones” from Nokia using an early primitive version of Windows.  Devices, cellular networks, and software were so slow that these seemed like useless toys.  Shows a serious lack of imagination on my part not seeing what smart phones would become.   Even having foreseen technical capabilities, nobody could have predicted the huge social impact they would have.  Before smart phones had such great cameras, digital cameras replaced 35mm film cameras.  We supplied flash for Kodak digital camera prototypes, and they had a huge early advantage.  Kodak’s systems were superior in compressing and reproducing digital images with lots of experience in very high-end digital cameras.   But they sat on that great technology, not wanting to erode their huge chemical film business.  They didn’t appreciate the laws of high-tech evolution requiring eating one’s own children, so by the time they entered the business, they were just middle-of-the-pack.  Flash devices had grown larger and cheaper so that Kodak’s advantage in JPEG compression became negligible.  At least it was fun for me visiting family and friends near my home city Rochester, NY, while we worked on those early cameras.  We also built a prototype E-Book reader demo for Sony that was the exact size of my current Kindle Paperwhite.  Fatter and heavier with a battery life of about 2 hours.  Obviously way too far ahead of its time.  Like Flash, it took over 20 years for battery and “E-Ink” display technologies to yield the modern capability of hundreds of books and weeks of battery life.  Exciting times, glimpses of the future.

Looking into the future was my job.  I presented the 5-year strategic plan for Flash Memory in 1995, showing the key markets segmented by application and projecting (a bit prematurely) that solid-state disk applications would take over by 1999, so that Intel would have to develop NAND flash technology.  That was a non-starter: no way could funding be secured for both technologies.  Turns out that management was correct on that call, and the NOR flash business was profitable for many more years until Intel finally began developing NAND flash in a joint partnership with Micron in 2007.  Intel became the SSD leader in 2015, so my 1990 vision took about 25 years.

Even 5 years was too long for me, so I went to pursue a strategic planning job in the chipset group.  It was very interesting getting into the mainstream PC business after more than a decade in the “outsider” world of nonvolatile memory and embedded computing applications.  Chipsets are the “glue” that connects CPUs to system buses, memory, and I/O.  High-performance graphics processors (GPUs) need direct access to gobs of main memory and their own dedicated graphics memory.  Back in the 1997-1998 timeframe, Intel was developing products for the graphics component business, and the highest performance machines needed the best CPUs, GPUs, and memory for the best graphics performance.  Meanwhile, Intel had to service the low end of the PC business with a basic graphics solution.  We worked on a chipset with a smaller version of the GPU embedded inside, with precious pins dedicated to the interface for its local graphics memory.  Customers wanted the option of discarding the embedded GPU to use an external, faster GPU, but there was no room for the extra pins needed for that interface, the AGP bus.  Quite a dilemma, because more pins cause the silicon area to grow much larger than what is needed to hold the transistors, so the engineers puzzled over building various versions of that chipset, with the added development costs and schedule.  Well, I’m “just a marketing guy”, what do I know, right?  At least that’s the stereotype most engineers have, but I had a good background in engineering and could speak both languages.  During this product proposal session, I noted that most of the pins for the AGP interface were exactly the same as those of the SDRAM interface used for the “embedded graphics on” version.  So why not just change over the function of the few differing control pins to AGP when you configure the chip for external graphics?  So simple, just asking “why not do this?”  The engineers thought that was a great idea.
It solved their problem, and they acknowledged my contribution, adding my name to their patent for the chipset.

Following chipsets, I worked as Strategic Planning Director for the new Graphics Component Division (GCD).  Around this time, Sony had made great strides in graphics performance with its latest PlayStation console, and they used their graphics patent portfolio to put the squeeze on this new competitor at Intel.  Lo and behold, they used a small flash-based cartridge called “Memory Stick” to store games and other software.  Intel attorneys asked me to review their specification for potential infringement on my FFS patent.  Sure enough, several basic concepts of FFS were in use, the first example demonstrating the “basic” nature of that patent.  A cross-licensing agreement was negotiated to the satisfaction of both sides.  Intel would not seriously enter the SSD business until long after my patent expired, so this was a great example of how patents should work: they provide for collection of reasonable licensing fees or exchange for rights to patents of similar value from other innovators.

The end of GCD turned out to almost be the end of me, but that’s a whole ‘nother story.  I moved to the Mobile Platform Group, where we debated solutions for very low power, small footprint systems, which included various options for integrating entire graphics chipsets with the CPU.  The focus was on operations per watt at low clock rates to improve battery life, versus the mainstream desktop mentality of “Gigahertz, Gigahertz, Gigahertz!!!”, which was becoming extremely difficult to build and keep cool.  The best solution emerged in the form of a new CPU architecture with very low power consumption.  Improving desktop performance eventually required multi-core CPUs and true multitasking operating systems, and that low power core was ideally suited for that new role.  Cool stuff.  Ironic that Intel ceded the graphics business to Nvidia and others, as GPUs now make up a huge percentage of Amazon Web Services servers, being best suited to the rapidly growing AI applications.  This concludes the fun stories of my Intel days.

More recently, I wrote and was granted a pair of patents for using AI in traffic control.   It drives me crazy to come up to a traffic light and sit there when a smarter controller would just change the light for me.   And how about being on a motorcycle or bicycle that the inductive loop sensors can’t detect?   Why not take video and various other types of input and use AI to determine when any type of traffic wants to get through the intersection?  Then, if you are already using AI to recognize different types of traffic, you can use it to decide which kinds of traffic should get priority.  After thinking about this for years, I finally started writing a patent in 2012, the first part of which was granted in 2013.  To me this was so obvious that I was surprised to find no prior art; my patent application was the first to make these claims.
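To give a flavor of the idea, here is a minimal sketch of that “recognize, then prioritize” logic.  Everything here is an illustrative assumption for this essay, not the actual patent claims: the traffic class names, the priority weights, and the scoring rule are all invented for the example.  A real system would feed video-based detections into something like this.

```python
# Assumed priority weight per detected traffic class (illustrative values only).
PRIORITY = {"emergency": 100, "bus": 10, "car": 5, "motorcycle": 5,
            "bicycle": 4, "pedestrian": 3}

def approach_score(detections, wait_seconds):
    """Score one approach: summed class priority, scaled up the longer it waits."""
    base = sum(PRIORITY.get(cls, 1) for cls in detections)
    return base * (1.0 + wait_seconds / 30.0)  # waiting longer raises urgency

def next_green(approaches):
    """Pick the approach to serve next.
    approaches maps name -> (list of detected classes, seconds already waiting)."""
    return max(approaches, key=lambda name: approach_score(*approaches[name]))

intersection = {
    "northbound": (["car", "car"], 5),
    "eastbound": (["bicycle"], 60),  # a lone cyclist an inductive loop would miss
}
print(next_green(intersection))  # the long-waiting cyclist wins: "eastbound"
```

The point of the sketch is the second half of the idea: once the AI can tell a bicycle from a car, the controller can serve traffic a loop sensor would never even see.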

In my search for prior art, I found several patents by Nestor Traffic Systems, including one with a claim for preventing collisions at intersections when the AI predicts that a red-light violator will barrel through the intersection.  To prevent the often fatal “T-bone” collisions, you just need to keep the lights at “all stop” (red) instead of giving the green to the perpendicular lanes.  But this isn’t being used.   The inventions of those AI patents are used for red-light cameras that detect violators and generate citations.  I asked the developers at American Traffic Solutions (ATS), which now sells those systems, why this capability is not used.  The AI red-light camera systems are standalone, used only to generate citations, and are not integrated with the traffic light controllers.   The answer given by the municipalities is that they fear hacking of the traffic lights.   My proposed system has that “hold the all-stop state” critical safety feature, plus I spell out specific hardware implementations that prevent hacking.  Given what ATS learned, I haven’t begun to fund development, much less argue the governmental issues, to get these ideas implemented.
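The “hold the all-stop state” interlock described above can be sketched in a few lines.  This is my own simplified illustration, not code from any of the patents: the predictor interface and the fixed deceleration figure are assumptions, and a real system would estimate speed and distance from video.

```python
# Assumed predictor: can an approaching vehicle stop before the line?
def violation_predicted(speed_mps, distance_m, time_to_red_s):
    # Crude physics check: stopping distance at a comfortable ~3 m/s^2
    # deceleration is v^2 / (2*a); if that exceeds the distance remaining
    # as the light goes red, expect the vehicle to run through.
    stopping_distance = speed_mps ** 2 / (2 * 3.0)
    return stopping_distance > distance_m and time_to_red_s < 2.0

def next_phase(pending_green, approaching):
    """Release the cross-street green only when no violator is predicted.
    approaching is a list of (speed m/s, distance m, seconds until red)."""
    if any(violation_predicted(*vehicle) for vehicle in approaching):
        return "ALL_RED"  # hold everyone stopped; prevent the T-bone
    return pending_green

# A car at 20 m/s, 40 m out, 1 s before its light turns red cannot stop in time,
# so the controller extends the all-red interval instead of releasing the green.
print(next_phase("GREEN_EASTBOUND", [(20.0, 40.0, 1.0)]))  # ALL_RED
```

The safety-critical property is that the default action on a predicted violation is to do nothing new: holding all-red costs a few seconds of delay, while releasing the green risks a fatal collision.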

I attended a patent licensing forum discussing the challenges and opportunities involved with autonomous vehicles.   There was some good discussion of topics such as “Who is liable, Ford or Apple, if/when their systems are integrated?”   But otherwise it was a bunch of lawyers wondering what people are going to do when AI is doing all the driving.  Geez, what do you do on a bus, a train, or an airplane?  You read, watch a movie, converse, snooze, whatever.   I tried to get their attention on AI traffic control.  What good is a smart car if it is stuck behind a dumb traffic light?   It will all be great someday when all vehicles are AI-driven and connected to networked traffic control, but until then, what about all the “conventional” traffic, we dinosaurs who like motorcycles and old cars?  Nobody was interested.

I had thought that one of the major AI car players might want to invest in AI traffic control, so I reached out to all of them.   Nope.  Autonomous vehicles: that’s where it’s at.  Focus on the big prize money.  The traffic control challenge is huge, and requirements vary enormously, from dense city traffic to sparse traffic in rural areas where a traffic circle is the best solution.

In any case, there are a lot of good ideas out there, and a lot more will pop up if we make solving these problems a priority.   Relative to society’s immediate critical needs, it’s a hard sell, but moving forward, new innovations can improve lives and create jobs in the process.

Hence the biggest piece of fostering innovation may be securing funding and gaining approval.  Look at what Elon Musk, Jeff Bezos, or Richard Branson can do now using the funding from their businesses, paving the way for further scientific experimentation and exploration in space!   I loved working in technology, but apparently I should have focused on making lots of money first.  I was never interested in the lifestyle of the rich and famous, but I sure would love to have the funding to turn my ideas into real products.  Maybe I can talk one of those guys into siphoning off a little cash to build AI traffic controllers.

A useful lesson for innovative people is to pay attention to any and all exciting new developments you see and then to join the team if you can.

2 replies on “Innovation Tales”

What a fascinating trip down memory lane this was! It has been fun and interesting following your career and endeavors all these years, living, marketing, and selling the fruits of your and Intel’s inventions since I joined in ’73 and exited in 2016. So wonderful you can recall all these details. An interesting footnote: way back in the ’70s, when I was at Memory Systems (MSO), there was an engineer and good friend who has recently passed, George Maul, who actually had the first idea to use CCD memory for photo capture. I’m not sure whether he was able to patent it; I think he was, but Kodak ended up with it, likely no interest from Intel management other than selling memories to Kodak. So many wonderful world-changing things came out of Intel! I will now look forward to the next set of dreams of working at Intel in the old days, with memory-refreshed details thanks to sagas like this! March on my friend!
Chris Feetham

Thanks, Kurt. This reminiscing is reminding me of idea generation sessions from years past, and the chase resulting from them (or the frustrations of having ideas 20 years ahead of the technology or market to exploit them).

Glad to hear your Traffic AI invention gained protection. It was great talking to you about it during the process.
