Contrarian view: these days I don't care how much a microcontroller costs. In quantities I'm designing for, it doesn't matter, and the cost is dwarfed by the cost of my time writing the software.
I've spent a lot of time dealing with crappy and broken vendor libraries, errata, SDKs and toolkits where the vendor changes their entire strategy every year or so, and I'm tired. At this point I've written (I think) three I2C libraries for various devices and systems (they're on GitHub) — I should never have to do that!
These days I mostly use Nordic chips — not only does almost every device I design need BLE these days, but the chips are fairly nice, the SDK is developed, maintained and supported, and there is a refreshing feeling of sanity. Sure, a BMD-300 or BMD-350 module with an nRF52832 will be $10 instead of $1 or $2, but I will save so much time and frustration, that it's definitely worth it!
A vendor starting to officially support Rust might change my perspective, but to this day I haven't seen any other vendor take software as seriously as Nordic does.
I think Espressif takes software fairly seriously, I really like ESP-IDF. As for Rust support, they're working on adding Xtensa support to LLVM [0] and someone has used it to build Rust [1].
It's not official Rust support, but it is official support for LLVM, which Rust is built on, and that works too.
I'm new to the microcontroller world and am using ESP-IDF on the recommendation of a friend. Espressif's example code is pretty good! I was leaning towards STM32, but the ESP32 seems to do everything we need for now... pretty nice device for $5 or whatever.
> In quantities I'm designing for, it doesn't matter, and the cost is dwarfed by the cost of my time writing the software
For low quantities I do that too, but once you get into the hundreds of thousands or millions of units, every $0.01 saved outweighs any developer cost. We are at a few dimes per MCU, and it makes a massive difference compared to the previous MCU, which was just over $1; you can hire a boatload of developers with that difference and not even notice it.
Off-topic curiosity: from your profile: C64 BASIC, C64 Assembly, Logo, Pascal, C, C++, Scheme, Perl, Common Lisp, Clojure, ? I assume you use C/C++ for these controllers? I'm C or ARM asm (our current controllers have 24KB free), but it's always interesting to hear whether people actually use anything other than C/C++. Like you say, maybe Rust, but it doesn't fit too nicely into that evolution, imho.
Great points. Another metric besides solid SDKs with support and community is power usage. In fact, some sort of benchmark of lowest sleep-state consumption, plus total "MIPW" (million integer ops per watt, or something similar), would be valuable. BTW, the new Espressif S2 has some low-power sleep modes that should allow long running times on a battery (no BLE though... :-/); I think that's more of a hard design constraint that economics can't change. Cheers!
I don't think that's a contrarian view. Engineering time vs. BOM cost has long been an understood trade-off in component selection. It's even becoming a common position in software-only projects to scoff at performance-oriented design decisions without doing so much as some quick napkin math to evaluate the impact on hardware cost. They just buy more AWS time.
There's definitely a market for stable, long-term supported (in terms of documentation/email/phone, and manufacture) hardware.
I'm working on a multi-year project with many subcomponents and I'm tired of vendors upgrading and obsoleting. Version x worked just fine for my needs. Version y gains me nothing and adds risk and time in testing.
I think this highlights how important adoption curves, cost curves and volume are to the overall context.
Going from $1000 to $100 can put something into the hands of experimental, hobby or light-commercial projects. It can enable new applications. These can be fundamental to setting an overall direction. Going from $10 to $1 is neither here nor there in those contexts. Cost, as you say, is virtually irrelevant unless/until volume is very substantial.
Volume is not usually part of the early adopter game, and price, below a certain threshold, is only relevant at high volume. Price (below some threshold) may never be a meaningful factor for a microcontroller built to control commercial lighting systems or whatnot.
The old habit is that price really matters. As prices dropped, this opened up entirely new uses for microcontrollers. Applications that would have been unreasonable at $100 became possible at $10. That's not the dynamic anymore.
There are things I would build many of if they cost $2 instead of $20. Say a medium-resolution 3D volumetric mapping of temperature and humidity in my house becomes reasonable at $2/node, but not at $20/node. I think scale still applies at this level, but fewer hackers are thinking in terms of swarms. There are transformative applications at scale that could be interesting even to individual hackers, but scale is its own niche, just as IoT in general is a niche.
True, and I think that if we sit back and think about it, we will come up with such applications. These aren't hard rules.
It's a matter of extent though, not absolutes. Hobby, experimental & light-commercial applications do not often need millions of units. If they don't, the difference between a $1 & $10 controller doesn't matter in the way the difference between $10 & $100 matters... in more cases than not. Other considerations (eg the dev already knows the more expensive platform, maturity, etc.) can easily override them.
High-volume swarm applications could change the reality. You need a nexus between low end and high volume. Swarms might be that. These aren't that common though, and "high volume and cheap" is usually a large company thing.
I.e., if you browse around Kickstarter, most microcontroller applications fit OP's scenario. The cost of the microcontroller is not that operative. There are undoubtedly some exceptions.
I don't know how to build a $2 device. Especially given that in any small device I've designed, power management always took 30-60% of board space and device cost. You hit the upper bound quickly if it uses a Li-Po battery.
There is a big gap between powering a hobby device from a bench power supply and building an actual end-user solution.
EDIT: I'm reading the answers here and it's clear I did not clearly communicate what I meant. Taping a board to a bunch of AA batteries is not an actual end-user solution. Neither is hanging a couple of boards off a bunch of wires. I was talking about products, something you might expect to pull out of a retail box and use.
Those mentioning PoE clearly haven't designed a device that is powered with IEEE 802.3af-2003 :-) (I have, and I have the battle scars to prove it). PoE doesn't mean you can just connect the wires and call it a day.
In a way designing a $2 device is simpler, as the low price means a low component count.
Having built a Dash Button clone with an ESP8266:
1. Choose a microcontroller that will run on <3 volts, and power it directly from two AA batteries in series. You have neither the power budget nor the financial budget for anything more complicated than that.
2. Your microcontroller should spend 99.99% of its time in its lowest-power sleep mode (see the sketch after this list). When sleeping, the microcontroller should consume <10μA, and the other, non-microcontroller circuitry should consume the same amount.
These are rock-bottom power levels; expect to remove any status LEDs, USB-serial converter/in-circuit debugger chips, and even linear regulators, compared to a development kit.
You don't have to use an ESP8266 like I did - a more experienced guy might have chosen an MSP430 or something similar.
3. Given you can only leave sleep mode 0.01% of the time, i.e. 8 seconds per day, if it takes 1 second to connect to wifi, send a message and get an acknowledgement back, you can only do that 8 times per day. If you want to update more often, you'll need something faster to connect and send a message. This is why many smart home systems use 433MHz to a base station, rather than using wifi directly.
4. A $2 budget won't include a case until you're making fairly serious volumes. Repurpose something you were throwing out, or wrap the circuit in electrical tape, or go over budget but declare victory since you could hit the target in larger volumes.
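To make point 2 concrete, the whole firmware lifecycle collapses to "wake, report, sleep". Here's a minimal sketch against the ESP8266 NONOS SDK C API (report_button_press() is a hypothetical helper; a real sketch would wait for the SDK's asynchronous send to finish before sleeping):

```c
#include "user_interface.h"      // ESP8266 NONOS SDK

void report_button_press(void);  // hypothetical: join AP, send one message

void user_init(void)             // NONOS SDK entry point
{
    report_button_press();
    // 0 = no wake-up timer: the chip stays in deep sleep (tens of uA)
    // until the button pulls RST low and the whole program runs again.
    system_deep_sleep(0);
}
```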
You can use ESP-NOW instead of WiFi, although I'm not sure how quickly that connects and how much power it uses. I think it's much less than WiFi, but do double-check.
I don't know how it turned out, but a friend of mine was talking about running a DC power loop round his house alongside the AC one. The idea was that he could pull 5-12v lines off that for powering things like motion sensors, or even tablets being used as control panels, rather than having the house scattered with USB wall warts.
I'm not sure. Though I see a whole lot of chatter from chip makers trying to sell solutions into that market. Sounds like they have figured out you can run lighting and other IoT stuff off it.
If you think about it, most current IoT stuff blows. You need power for the device. And then you're using shit like WiFi for data. And even worse shit like Bluetooth for configuration. Security is a big, big problem because it's brittle and conflicts with consumers' need for stuff to just work. PoE solves all of those issues.
The single pair Ethernet web pages gave me the impression that it is aimed at industrial applications; the connectors alone look like they would cost $2.
A lot of businesses need to demonstrate that a product is margin-positive...
That means if you are making a new gadget to send out to users, and hoping to make money on data collection, a subscription service, ad revenue, etc., the unit cost of the hardware is critical to the business case, even if it isn't much money overall.
Try telling investors "we made this weather station, and we will make money by selling data to NOAA".
The very first question will be "how much does each station cost", and "how much will NOAA pay". Nobody asks how many engineering hours you spent writing the code to fit on the microcontroller, because the assumption is that in any successful business that becomes irrelevant.
Perhaps investors should accept "Each weather station costs $1000, but we think we can get it down to $10". But most investors won't believe you if you say that...
But to be operative in this way, you must be talking about a very high volume. Millions of units, or close.
My point was not that cost never matters, just that $1000->$100 is not the same as $10->$1. If your weather station startup is using $10 microcontrollers, getting that down to $1 is probably not important. If you are building 10 million weather stations, it is important, but this is a pretty specific scenario that doesn't apply often.
We saw something similar with PC pricing. The cost of a functional PC never stopped falling, but that stopped being as important once a certain threshold was reached. Projects like the Raspberry Pi do take advantage of continued cost reductions, but the Pi is marginal compared to the earlier cost reductions that caused an explosion of PC use cases.
This is a good point. I actually always sort by price when selecting components — not just to be cheap, but also because less expensive components are generally more popular and widely used, which means better support, better availability and better longevity.
But — to all those who worry about price — I would suggest looking back and checking how many devices you actually built in the past. Was it singles, tens, perhaps hundreds? At those price points, how much would you save using a $1 uC instead of a $5 uC? Tens of dollars? Perhaps hundreds?
Now compare that to the cost of your time spent developing and maintaining the software. At any reasonable hourly rate you pick for yourself, you're probably better off going with the uC that has good software out of the box.
If you decide to build your devices in commercial quantities, that's when price optimization begins to matter, and somebody will care about fractions of a penny. But that somebody won't be you: going beyond single thousands involves a different set of skills.
Even in the 10s of thousands, these price differences should probably be a secondary issue for many commercial contexts... at least at the startup end of the game.
If you are producing a $40 microwave, smart bulb or somesuch... shaving dollars off component costs is operative. I think this supports your original point rather than detracts from it. This is the mass production game, a mostly big company game.
Microcontrollers have reached a cost point where the dynamic has inverted. The people who need to care about a new, cheaper-than-ever MCU are large companies, not startups, hobbyists and niche product designers. There will be lots of exceptions, especially hobbyist use cases... but generally.
Interesting take, because I see the Nordic HAL (nrfx) as a little heavy. Just tracing through toggling a GPIO, it's at least six function calls; the register is abstracted behind some opaque pointer called reg or cb (control block, not callback), and they don't ever seem to intend anyone to trace through their driver.
I like the Nordic chips; the event system and DMA are very good. But even their events are abstracted away, where you aren't allowed access to the ACTUAL interrupt: you get to register a callback that, on ISR, they promise to call for you with a context pointer when they allow it (SPI), and when they don't, you're on your own (GPIO event callbacks).
Yes, Nordic does take it "seriously", but it's also heavy and complicated. I'd love to see a light implementation of their drivers instead of a HAL that is supposed to cover all models for all time. No one actually switches chips like that.
I'm also worried about nrfx. But I think it is the price you pay if you want to have a BLE/Mesh stack running. You can't really go all bare metal anymore.
The way I see it is a spectrum: on one end, you go bare metal and write your own startup code, TPM0_IRQHandlers and implement core stuff in assembly (been there, done that). On the other end, you run Linux and who knows what's happening on your extremely complex system and whether it will run at all the next day. I think Nordic is trying to find a balance in between.
Nrfx has some stupid decisions in it. Like #if guards where, if you don't "enable" something, the variables are never defined. For me that means some SPI code throws errors when SPI instance 2 isn't enabled. My code is still valid; it's just that variables that used to exist no longer do. Annoying. It doesn't cost anything to "enable" the peripherals in the preprocessor.
It's ~47 instructions, in the example I just checked, to get a return from nrf_gpio_pin_read(some_pin). That's not good. It also reminds me that nrfx and nrf aren't the same, and when you need to use one vs. the other is anyone's guess.
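For contrast, here's what the raw register path looks like, a sketch against the CMSIS-style definitions in Nordic's MDK headers rather than the nrfx driver API (the pin number is a placeholder):

```c
#include "nrf52.h"   // Nordic MDK register definitions

// Reading a P0 pin is one volatile load plus a shift and mask:
// no driver state, no control block, no callback registration.
static inline uint32_t pin_read_raw(uint32_t pin)
{
    return (NRF_P0->IN >> pin) & 1UL;
}
```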
I do like their ASSERT() and use it for my own things. Its report function is weak, so I can override it with one that reports to my other FreeRTOS threads.
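That's just the standard weak-symbol trick, sketched generically here (the names are hypothetical, not Nordic's actual API):

```c
#include <stdio.h>

// The SDK ships a default reporter marked weak...
__attribute__((weak)) void assert_report(const char *file, int line)
{
    printf("ASSERT at %s:%d\n", file, line);
}

// ...so defining a non-weak assert_report() anywhere in your own code
// replaces it at link time, e.g. with one that posts to a FreeRTOS queue.
```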
Nordic isn’t bad, but they went a little overboard with the HAL. I don’t want “one code” that runs on all chips poorly. I want chip specific code that runs great and yes I’ll have changes to make if I switch chips, but that’s life!
Not only can’t you go bare metal with BLE and real complexity, but with a modern RTOS and knowledge of when you need a thread there is no reason to on chips with 48Mhz and 512kB Ram.
FWIW, I do recommend Amazon FreeRTOS, but if you look into it, investigate the LTS version they are moving to.
Random question: do you happen to have any experience with any of the Dialog Semi range of Bluetooth ICs? (E.g. this one: https://www.dialog-semiconductor.com/products/connectivity/b...). They're ridiculously cheap but don't have internal flash. I haven't had occasion to try them out so far, so I'm just curious if anyone else has.
No. For uC Bluetooth, I only used Nordic chips. For general-purpose uC I have lots of experience with TI MSP430 and NXP/Freescale Kinetis K and KL lines. Nordic has by far the best SDK and support of those.
Having seen the complexity of BLE and Mesh, I would be very wary of using a vendor if I didn't see excellent software support. For example I wouldn't even consider Kinetis for wireless solutions, having seen how bad their SDKs were.
Hardware is relatively easy to get right, software is hard.
I do, their SDK is OK... but Nordic is the clear leader on tooling and SDK quality. The DA14531 has OTP memory for the final firmware, and you can program and run out of RAM for testing. The firmware is typically mirrored to RAM anyway, so this is pretty much identical to how it works after it's programmed.
Oh, interesting. I had assumed that the usual dev setup would involve external flash rather than running the code from RAM. That sounds quite a bit easier. It seems like Nordic is the way to go unless you really need to shave every cent off your BOM cost. Thanks for responding.
I couldn't find a scenario where Wi-Fi would add value as opposed to BLE. It consumes more power, forces you to use TCP/IP, which pulls in lots of complexity without great solutions (you need to deal with pairing, security, etc), is more complex to set up... I could go on. Solutions like Thread are relatively recent (in fact I still don't know if licensing has become more reasonable).
BLE and Bluetooth Mesh give you tons of functionality with low power, in a complete solution. Do I really need my devices to have an IP address?
I guess this depends on what you're building, but in most cases this is about home automation, sensors, lighting, toys, etc.
Say I'm doing home automation and I decide to go with BLE and Bluetooth Mesh (I didn't realize this was a thing). What's the recommended chip/device/design to bridge this Bluetooth network with "regular" computers?
I can't speak to BLE/Mesh, but the Thread protocol (which Nordic also supports) has the concept of a "Border Router", which is exactly what you're looking for.
I have such a soft spot for PICs, specifically the 16F84.
20-odd years ago, when I didn't know any better and work was boring, I built a universal remote control and put it in a PSX controller case. This was all coded in assembler, which I still have, and programmed with a DIY adapter that ran off a parallel port. So much fun. I used it for a few years.
Oh me too! Springer had an amazing book on PIC16F84 programming that was my self taught low level coding education. Pre-Arduino, the PICs were the bomb, you could order free samples, download the compiler free, and build a programmer for like $20 in parts. Fond memories!
Me too! My dad bought me a PicStart Plus and a bunch of PIC16F84s for my 15th birthday. I tried to badger him into buying me the Hitech-C compiler, but he never budged, so I was stuck with assembly. But that was a good thing, in hindsight. :)
I use them for PCBs. I don't have access to a local version of OSH Park, so if I have boards fabbed locally I am paying through the nose for them. (If anyone has a link to a decent OSH Park alternative based in the UK, I would love to give them a try.)
If I need them "ASAP", even tacking on the £17 DHL charge for 2-4 day shipping they still come out much cheaper than purchasing locally, but I normally just choose "standard" postage, which means it takes ~10 days from ordering until I have the boards in my hands but drops the shipping cost to ~£4. (Shipping costs vary based on the size and weight of the order.)
I've yet to use LCSC for components because I've been burnt in the past ordering ICs from China, and when ordering anything over the value of £15 I often get handed an £8 handling fee from the postal service to process a few quid of import VAT, which always grinds my gears, so I tend to shop locally for components. But I do plan to give them a try at some point.
EDIT, in ref to JLCPCB: just keep their capabilities in mind, and set your EDA/CAD software to mm rather than mil to save headaches if you are planning to use BGAs in your design.
I tried AISLER once; the PCBs I got were very good quality. The first run didn't work because I used tolerances that were too tight (I couldn't easily find their tolerances on their site); they refunded me immediately and I re-ordered the fixed boards. Those were also very good quality.
Overall, I have a very good impression of them. If the price is acceptable for you, I would definitely recommend them. Shipping was very fast, too, the boards arrived within the week to Greece.
That still doesn't look very usable, as someone who designed a board, I don't know what the milling and drilling diameters are for. I know about track size, track clearance, minimum via diameter, etc.
The way they have it, I need to know about how PCBs are fabricated to read it accurately.
Well, it was more that recently Chris Gammell was designing a board and had it set to mils while doing his vias (iirc) to break out the BGA pads, using the specs listed on their site.
But the conversion from mils to mm introduced a rounding error that pushed him just over the limits, which then caused the automated DRC on the site to reject his board even though it was "in spec" according to the specs on the site (as they list both mm and mil).
And it's not like you can just get them on the phone and get it nudged through, even though some vias are a few thousandths of a mm out of spec but should be fine irl. The computer says no, so you gotta go back and redo a ton of traces, where a local shop running on higher margins would probably just push the order through after a brief phone call.
I got some SK6812 V3 LEDs, or whatever the RGBW 'Neopixel' things are, from there a while ago. Really nice ordering experience, I don't remember having any problems with it, and they include datasheets, which is awesome.
It looks like they mostly drove the price down by reducing functionality. There are no "fancy" interfaces like UART or I2C. There's only 64 bytes of RAM. The chips are one-time programmable, which is cheaper than flash memory, and there's only 512 bytes of program space. The CPU is pretty limited too: it doesn't even have an instruction to multiply two numbers. There's no debugging interface on the chip; instead you buy a simulator device that behaves like the chip but has reprogrammable memory and a debugging interface.
It all adds up, or in this case it doesn't, and what's left is a pretty inexpensive chip.
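For anyone wondering what "no multiply instruction" means in practice: the compiler (or you) falls back to shift-and-add, roughly like this generic C sketch:

```c
#include <stdint.h>

// 8x8 -> 16-bit multiply without a MUL instruction: add the shifted
// multiplicand once for every set bit in the multiplier.
uint16_t mul8x8(uint8_t x, uint8_t y)
{
    uint16_t a = x, acc = 0;
    while (y) {
        if (y & 1) acc += a;   // bit set: accumulate current shift of x
        a <<= 1;               // next power-of-two multiple of x
        y >>= 1;
    }
    return acc;
}
```

Eight iterations worst case and a handful of instructions, which matters when program space is 512 bytes.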
Took about 10 minutes to get an LED blinking, about half of which was just figuring out what (C language) USB libraries I needed to install to get the "cargo flash" command to compile. I used to do a lot of microprocessor development in C and ASM. This is the easiest experience I've ever had in getting one up and running with fully open source tooling (many microprocessor vendors do have quite good closed-source tooling).
While I haven't actually done a real project yet, the hardware abstraction libraries look much safer to use than any of the C/ASM toolkits I've used. The reason being, the APIs are designed to leverage Rust's borrow checker and other safety features to make sure errors in the use and configuration of the hardware are caught at compile time.
No. If you look at an STM32 manual, they can be > 1k pages as you need that much information including the errata to truly use a part. MCUs with < 100 page manuals will rely on YOU finding the bugs and working around them.
Even with STM32, there are some peripherals that can be difficult to use without careful reference to the application notes, errata, and/or ST's sample code. The I2C peripheral on the STM32F1 is a prime example -- if you don't read/write certain registers in precisely the right order, the peripheral will lock up or return incorrect data.
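For instance, the ADDR flag on the F1's I2C block is cleared only by reading SR1 and then SR2, in that exact order; a sketch of that one sequence, assuming the CMSIS device header:

```c
#include "stm32f1xx.h"

// After the slave acks its address, ADDR is set. Per RM0008 it is
// cleared by an SR1 read followed by an SR2 read; skip either read
// and the peripheral stretches SCL low and the transfer stalls.
static void i2c_wait_and_clear_addr(I2C_TypeDef *i2c)
{
    while (!(i2c->SR1 & I2C_SR1_ADDR)) { }  // wait for address ack
    (void)i2c->SR1;                         // first read: SR1
    (void)i2c->SR2;                         // second read clears ADDR
}
```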
On the whole, though, you're absolutely correct. :)
In further agreement: part of getting to know a new MCU is to break your project down into a series of small conformance test suites and, at the same time, try to get into the minds of the peripheral designers. Delays in config registers, changes not becoming visible until some other action, weird interrupt behavior, timers, capture/compare, reset, brown-out, etc. They all have issues, and one can't just assume that _anything_ works. Everyone blinks an LED, sometimes multiple times on the same project. The firmware you use to debug the chip and the system is the stuff that keeps you sane.
Design systems for visibility and debuggability: multicolor LEDs, extra serial ports, extra flash to dump memory to, an external control MCU that can handle DFU, serial port access, monitoring, etc. Use the largest memory part that has the same pinout. Building a project that is going to ship at qty < 100 with the smallest, most resource-constrained parts is a foolish thing. Spend an extra $2 and get >256KB of RAM. Get remote debugging working in the first week. Automate relentlessly.
Not all. Some parts require proprietary tools to build a full firmware image for the chip, to debug the part, or to write to the target's onboard flash. Thankfully, these are all getting less common -- but there are still some out there. For instance, the Cypress PSoC requires Cypress's tools to generate configuration data for the part's configurable digital blocks; there isn't enough information in the reference manual to do this yourself.
At least the Cypress tools are freely available and well documented. I have built probably 7-10 products using the PSoC; the tools worked well, and the resulting firmware was solid. Don't know why you would use Cypress as your example?
Because you can't not use their tools. The register TRM is deficient on some details of the UDB structure, like routing, which is essentially mandatory for using the part. (You can technically program the part using only hard peripherals and GPIOs, but that leaves you with a crippled microcontroller.)
Yes, in the sense that even the Chinese no-name parts do have to comply to some kind of standard.
The selling point with lines like STM32 or NXP is that they have lots of good documentation, Application notes etc.
Simple example: I was looking at trying a weird Chinese part with a hardware NN accelerator, but the entire documentation available (Chinese or English) was 10x shorter than the documentation on the serial peripherals alone of an NXP part.
I think there are free PIC assemblers but no free C compilers. All the tools needed to actually load code onto the chip are either proprietary or don't work with the modern programmers.
No great loss in my opinion. PICs are old and slow, and the C programming experience is much worse than an STM32.
Interesting re: pics. That might be why people I know who use them used assembler. I did use them once, and used assembler myself, ages ago.
I'd say avr8, msp430 and stm32 cover the full range.
avr8 (atmega/attiny) is 8bit, very easy to understand, has excellent open ecosystem, the go-to if you don't need the other two. There's fancy new xmega stuff, but at that point, you'd look at stm32.
msp430 is 16bit, smaller rom/ram sizes but very low-power and excellent open ecosystem.
stm32 is 32bit, Cortex-M0+ based in the current generation.
The STM32 is even better than that because the family scales up and down from M0 to M4 or higher without much of a change.
Plus the free libraries that ST has made available are magical. Just - incredibly good. Best embedded C experience I've ever had. Yes using the chip in a more 'bare metal' scenario can be challenging (set 4 registers in the proper order with obtuse values to route the clock signal properly), but it's usually not necessary since the provided libs are so cohesive and comprehensive.
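For the curious, the bare-metal clock dance being referred to looks roughly like this on an F103: a hedged sketch of the 8 MHz crystal to 72 MHz PLL bring-up, assuming the CMSIS device header (RM0008 describes the required ordering):

```c
#include "stm32f1xx.h"

static void clock_to_72mhz(void)
{
    RCC->CR |= RCC_CR_HSEON;                       // start the crystal
    while (!(RCC->CR & RCC_CR_HSERDY)) { }
    FLASH->ACR |= FLASH_ACR_LATENCY_2;             // 2 wait states @ 72 MHz
    RCC->CFGR |= RCC_CFGR_PLLSRC                   // PLL input = HSE
               | RCC_CFGR_PLLMULL9                 // 8 MHz x 9 = 72 MHz
               | RCC_CFGR_PPRE1_DIV2;              // keep APB1 <= 36 MHz
    RCC->CR |= RCC_CR_PLLON;
    while (!(RCC->CR & RCC_CR_PLLRDY)) { }
    RCC->CFGR |= RCC_CFGR_SW_PLL;                  // switch SYSCLK to PLL
    while ((RCC->CFGR & RCC_CFGR_SWS) != RCC_CFGR_SWS_PLL) { }
}
```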
I tried the official stm32 tools (code generator) at work and found them horrible. But Zephyr supports some of the boards which makes it worthwhile (for me) to use for quick prototypes.
I'm a vim+gcc user and have never used an IDE in my life, yet even I can't live without STM32CubeMX. You'd have to be crazy to live without it; how else could you possibly figure out non-overlapping pin assignments?
I might be old school... I really liked having a programming manual while doing bare-bones development (I did this on an Analog Devices part). The HAL didn't work out of the box because I didn't understand what else was needed for it to work; I had to call certain functions. In that case it would have been less work to implement the same functionality with interrupt handlers I wrote myself, which would also have given me a better understanding of the platform, since I would have had to read the documentation.
I try not to use them, but they seem fine for basic SPI/UART/etc. Someone did an analysis of the dies, and apparently they compare favorably in some ways:
I haven't tried anything complicated with the peripherals though, and the thing is, STM32F103 chips are so cheap already. The design is over 10 years old, and ST is so good at making them that most "64KB" chips actually have 128KB of Flash.
The GD32V chips are very cool, though; they keep the STM32-alike peripherals, but they use a faster RISC-V CPU core (rv32imac).
If the author had compared a Cortex-M0+ part [1], there are lots available for < $1.
For me, the ability to use command-line gcc and an editor on Linux to program and debug these things is the real win. I know the new kids all want a fancy IDE, but I get from starting vim on a blank screen to running blink in about the same amount of time it takes Eclipse to start up.
Back in the day (2002) I liked the PIC16F628 and even designed a board for it [2], but now I pretty much can't stand to program them.
Experienced people usually want to use their preferred IDE. The IDE experience easily available for microcontrollers is usually 1-3 specific IDEs provided by the vendor - rarely the same thing.
What I want to know about a microcontroller is, (a) can I use Make/Gcc/Gdb to develop for it without a lot of guesswork, and (b) what is its interrupt latency? The article only hinted at (b) for a couple of them, and said practically nothing about (a). IDEs are a trap.
I recommend STM32CubeMX: this is a GUI tool that creates a gcc/Makefile project for any of their micros, so you only have to use it once to set up an example "hello world" project. No IDE is needed (older versions only generated code for IDEs, but no longer).
If you follow the comments in the generated source code you can keep your code separated from their code. The advantage is that you can reconfigure your project later- for example to change microcontrollers, or enable new peripherals.
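Concretely, the generated sources are fenced with marker comments, and the generator only rewrites the text outside the fences. A trimmed sketch of a generated main.c (the my_app_* calls are hypothetical user code; exact marker numbering varies):

```c
int main(void)
{
  HAL_Init();                /* generated */
  SystemClock_Config();      /* generated */
  MX_GPIO_Init();            /* generated */

  /* USER CODE BEGIN 2 */
  my_app_init();             /* yours: preserved when you regenerate */
  /* USER CODE END 2 */

  while (1)
  {
    /* USER CODE BEGIN 3 */
    my_app_poll();           /* yours: preserved when you regenerate */
    /* USER CODE END 3 */
  }
}
```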
Interrupt latency is always well covered in device docs, and pretty easy to find. And, honest question: isn't it basically determined by the architecture and clock speed? So, for instance, an ARM Cortex-M0+ at 40 MHz would be the same in every family?
It is determined by the architecture, clock speed, memory architecture, time to wake from sleep, etc. So, some ISA, some other stuff. If you are familiar with the arch, you can guess pretty well, but in a big survey it seems like the point is you aren't.
I used it for research in a project a month ago and found many of the chips it mentions are end-of-life / discontinued now. The prices of those chips (if available) are actually higher due to shortages.
Realistically, very few fresh project starts look at the cost of the controller (in fact most EEs are completely abstracted from it) and put more consideration toward E2E LTMC costs, which include things like firmware updates, wifi or network diagnostics, etc. I see a lot of companies going towards Linux and Raspberry Pi-esque clones simply for the ease of development and debugging, except when power, heat or space are important factors.
At this point the Atmel ATmega and STM32 chips pretty much dominate hackerspace projects thanks to the libraries and strong ecosystems.
As near as I can tell all the other vendors are a rounding error in marketshare of new product starts.
Note: I'm excluding things like auto and appliances along with other entrenched industries who probably consume a ton of those $1 chips because they've already invested heavily in them 10+ years ago.
Came here to say the same thing. I agree, I don't think they were necessarily $1 in 2017, but it would be a very large omission today. It may be worth remaking an article like this today...I wouldn't mind helping someone out with that.
Dumb question, but how do you learn to work with/program these things? Any reading material I can look at? I started learning C this week and this looks very interesting.
You can write ordinary C code to run on them. You just need a separate compiler/linker/etc. because they have a different CPU architecture than your computer. For an ARM Cortex-M chip, you can use the `arm-none-eabi` GNU toolchain:
But when you write a program for a microcontroller, you don't have an OS kernel, so your code needs to include drivers for anything that you want to communicate with.
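To give a feel for what "no OS, talk to the hardware yourself" means, here is a minimal bare-metal blink sketch for an STM32F103 "blue pill" (LED on PC13), assuming the CMSIS device header plus the usual startup file and linker script:

```c
#include "stm32f1xx.h"

int main(void)
{
    RCC->APB2ENR |= RCC_APB2ENR_IOPCEN;           // clock the GPIOC port
    GPIOC->CRH &= ~(0xFu << ((13 - 8) * 4));      // clear PC13 config bits
    GPIOC->CRH |=  (0x2u << ((13 - 8) * 4));      // output, 2 MHz, push-pull

    for (;;) {
        GPIOC->ODR ^= (1u << 13);                 // toggle the LED
        for (volatile int i = 0; i < 500000; i++) { }  // crude delay
    }
}
```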
It's a little outdated, but I like this introduction to STM8 chips (which this article also mentions) as a quick crash course on embedded C:
For learning and getting started, nothing beats the Arduino family. Huge and mature open ecosystem.
And there's the advantage that dirt-cheap clone boards exist. There are huge starter kits on AliExpress in the $10-$20 range, with an Arduino Uno or Mega2560 clone included.
Have a look at mBed. It's C++ rather than C, but it works with a wide range of boards/chips, many of which are cheap, and it has a solid range of libraries. I've used it mostly with STM32 boards. It uses the Arm compiler as standard, but can be switched to GCC.
You can start with a cheap ESP8266 dev board, which I believe runs Lua. I made a button that just posts to a backend when we are low on milk in the office kitchen, but you can do some crazy stuff. I'm sure there are some great books out there, but a lot of people just learn by messing around and looking at other people's tutorials online.
I found that stupidly simple with many, many libraries to do things. WS2812 LED strips using FastLED is very popular and rightly so, great fun. Really quick and easy to start flashing lights in patterns. Then copy some wifi code so you can control it with your phone (or tablet or laptop). Mesh more than one. If you're just playing $3 is still very cheap.
And if you've just started learning C, being Arduino-compatible, with many step-by-step tutorials a search away, is gold.
You need a compiler/IDE, a programmer (a device that you slot the chip into to upload the code), and the datasheet. Past that, googling starter tutorials and reading the datasheet will take you far!
Historically one way people got into these was eg from learning how to program a Commodore 64 and other constrained hardware.
I doubt that's the most efficient route these days, but it can still be fun. And there's lots of material on the likes of the Commodore 64.
Oh, and of course there's always Google: 'but how do you learn to work/program for microcontrollers?' gets lots of hits, I bet some of them are even good.
I think IDEs are highly overrated. Some may provide good product-specific documentation, autocomplete, flashing and debugging, but most of the time they are highly opinionated programs with their own quirks and annoyances you need to work around to get your code running on a chip. I would be sad to have to turn down a good chip because it has a bad IDE.
I much prefer good support for open-source frameworks like PlatformIO [0]. That way it hardly matters which specific chip I'm using; my workflow is mostly the same, and I can use the same editor, tools and methods I already use for developing all my other code without having to learn the quirks of new tools.
Not in this price range. The only parts from that family left in production are the ColdFire series, which are more expensive (starting at around $1.60@1k) and have unimpressive characteristics compared to other options (50 MHz with poor IPC, limited memory size, minimal peripherals). And even those are on their way out -- the product line hasn't been updated in ages, and NXP has shifted its focus to ARM.
Still, $1.60 isn't bad for a 68000. Some of the chips in the article are spins on the 8051, so the 68000 is a big step up from them. Now, true, you can get every peripheral imaginable in an 8051 spin, and often for that $1 price, but the 68000 has a lot more functionality as a CPU.
It frustrates me that the AVR architecture is so good and yet so poorly specced. I would really like to see something with the RAM size and clock speed of some 32-bit controllers. Sometimes, when you just want to throw some bytes around, 8-bit is best.
It would have been fun if Logic Green had made a high-end variant of the LGT8F328 where they stuffed it full of RAM (like 48K). It has more one-clock instructions than the ATmega, and does 32 MHz from an internal oscillator.
This article is great, but misses the mark by focusing on this arbitrary $1 price point. For a hobbyist, a dollar or ten is all the same. These very cheap chips are intended for high volume applications, far from the realm of DIY.
If you're doing a hobby project, take something off the top of the line. Having all those peripherals and compute power is just one less thing to worry about.
True. But if you are a hobbyist, at the cusp of commercialization, then the MCU cost is a huge factor in total product cost.
As an example, I made a flow measurement and alarm system for my home. I used an ESP8266 development board, which is damn cheap, at about 5 USD. I successfully completed the project.
However, I decided to explore options for commercialization, and that 5 USD became a huge cost increase for my overall system. So I am now trying to design systems with MCUs that cost less than 30 cents.
It is true that ESP8266 chips are slightly more expensive than 30 cents, but I believe you were not planning to use the development board for mass production.
My comment was a reaction to the following statement. It is true that you should absolutely use the cheapest MCU, ignoring the development cost, if you are aiming to manufacture millions of them:
> and that 5 USD became a huge cost increase for my overall system.
> For a hobbyist, a dollar or ten is all the same.
Not really. If I manage to burn it out, that $10 MCU just became a $20 MCU. And if I want it to run off a battery, extra peripherals and compute power I don't need don't help me any. And if I want to make more than one, I want them to be cheap.
I burned out 3 NodeMCU boards in one day because I apparently know nothing about circuits (especially transistors) and kept effectively shorting a digital IO pin to ground.
Two of them appear to have blown voltage regulators. If I plug them into USB, the 3.3v pins give 0.25v on one board and 1.0v on the other. The third one appears to work fine, but I can't get it to accept flashes reliably. At least 90% of the time, it can't even open the COM port to flash (Says the device is busy). When it DOES actually flash, in the end it will often throw an error saying there was a hash mismatch, indicating the firmware was corrupted during the flashing process. I have had once or twice where it flashed with no errors, but my code appears to fail to run.
It's not like I'm gonna really notice the $13 I'm spending on Amazon to replace the boards (I know I can get them cheaper on AliExpress or some other Chinese site, but I wanted them in two days, not four weeks), but still I hate spending money I don't have to.
Eh, sometimes it's nice to have a go-to workhorse that you can throw into almost anything.
The STM32F103C8 is a good example. If you buy a few dozen off of AliExpress or TaoBao, "blue pill" boards containing them and a microUSB connector cost less than $3 each.
They only run at 72MHz, and they don't have floating-point hardware, but the price point means that you can give them away without a second thought. When someone has an idea for a Halloween decoration, or a garden sensor, or a proof-of-concept, it's usually my first choice because it costs less than a coffee.
Once it stops getting rained on, and corroded, and coated in dust, and submerged, and fried by a squirrel who thought wires were edible...then you can reach for the top of the line.
You can use an ESP8266 as a wifi router for up to 8 clients at 5 Mbps each, I think, which is really impressive, so it should be able to do that... I used that for the kids in the car so that they could play games together.
I recall that somebody found that the ESP8266 (no info about the ESP32) can talk directly to Ethernet devices, although it lacks a proper PHY with magnetics.
It could be handy to avoid using the same WiFi subsystem both to connect to the WAN and to accept connections from all clients, which I believe has a huge performance impact.
Now, what if we wanted to overcome the speed and user-count limitations by connecting two or more ESP boards in a magnetics-less way, either crossing the TX and RX pairs in the case of two chips, or using a switch chip such as the RTL8306 to connect more than two ESPs?
I think it would be possible. More as an exercise though, since costs would rise to match many already-made products.
Like, for networking? None of these parts have any network capabilities -- they're microcontrollers, not application processors. Even if you added networking capabilities through external hardware, many of them wouldn't have enough memory for a network stack, let alone to handle any protocol on top of that. (Many of them have 4 KB of SRAM, or even less.)
If you don't intend to mess with hardware, just get any random Intel Atom mini-PC; you can get them for way under $100.
These things will give you like 5KB/s in bandwidth, and that's after spending months figuring out how to get networking working on them. (I am assuming you have no embedded computing experience here.)
In fact scratch that, none of these even have enough resources to handle TLS/SSL for one connection.
If you don't care about security telnet in the clear would be usable. (Because that's what we really really need: millions of embedded systems accessed by Telnet. EHLO!)
Edit: I forgot about inexpensive arduino nano boards. I'll leave my original incorrect comment below though.
Is this supposed to be geared towards hobbyists? If so, the $1 cost is sort of irrelevant if you have to spend $20 to $100 on programmers, development boards, etc., when you can get a Pi Zero and micro SD card for about $10 and be up and coding in an hour.
If you really want to overpay for what's basically a commodity these days, to support a project that was already wildly successful a decade ago and has outside growth several times the size of the original project... go ahead.
And if you use your desktop computer, you can be up and coding in a few seconds. But as with the Pi, you won't be working with bare metal microcontrollers which are the point of the article.