Arduino Comparison chart (all are Atmel):
- CPU: 8 bit AVR, 16 MHz (three boards); SAM3X8E, 32 bit (the Due)
- I/Os and voltage
- ADC: 6 x 10 bits; 12 x 10 bits; 16 x 10 bits; 16 x 12 bits, 1 MHz (Due)
- DAC: 2 x 12 bit DACs (Due only)
- Programming / debug interface: ATMEGA16U4 serial (two boards); full speed USB on chip; one native USB (Due)
- Pin compatible versions: Mini (DIP, no USB); Pro (no connectors)
- Board list price (base model), USD
The answers to these questions tell me whether the ADC is good enough
to do a certain job. Do I need to improve it? Do I need to
calibrate each system? Can I squeeze a bit more performance out of
it? Or do I need another processor or an external ADC to do the job?
Measuring real bits
An LSB is one count of the ADC. At 5V and 10 bits, an LSB is 5V / (2^10 - 1) = about 5mV. In a perfect universe you would execute analogRead(A0) and it would return a nice stable value equal to Vin * 1023 / 5.0. But in this universe, you get a whole bunch of errors. The ADC has offset and gain errors, the reference for the ADC has errors and noise, the ADC has noise, and worst of all it has non-linearity. And the next ADC you measure, even of the same type, will have a different set of errors.
Depending on what your accuracy requirements are, you can either
improve or correct for some of these.
Minimum industry standards dictate that an ADC have no missing
codes, meaning that as you slowly increase the voltage, all 2^N
output codes will eventually show up. It's a pretty low bar. But
since marketing weasels are the ones who write the first page of
a data sheet, you need to dig deep into the numbers to get the
whole truth. Marketing says "12 Bits", but the engineers who
design and test these parts know the real truth, and usually
publish it somewhere on pages 2 through N.
Resolution and Accuracy
Accuracy is probably your ultimate goal. You want to know that when you apply 1.000 volts to your 5.0V ADC you'll get 1/5 of the full scale number. For a perfect ADC, you would get 1.00 * 1023 / 5.00 = 204.6, +/- 0.5. But to get that, you need offset and gain accuracy of less than 1/2 LSB, a reference accurate to better than 1/2 LSB (.05%), and an ADC with less than 1/2 LSB of INL, DNL and noise. Even with these near-perfect 1/2 LSB specs, these 6 error sources can and will all add up, and you can get errors approaching 6 * 1/2 LSB = 3 LSBs. Ouch. Then if any of these specs drift with temperature (Note*: "Everything drifts with temperature."), things typically get worse when the temperature changes. This is why an error budget is an important part of engineering.
* Erickson's Law of Temperature Drift
One technique to correct for some errors is to calibrate. Gain, offset, (and reference) errors can often be calibrated out with a simple linear correction. Higher order errors are generally impractical to compensate for. Remember the equation of a line from high school: Y=MX+B? If you know M, the gain correction, and B, the offset correction of your ADC, then you can correct for these. If you know M and B at various temperatures, you could even compensate for temperature drift, but setting temperature and building a table of calibration vs. temperature is usually too expensive. INL, DNL and noise are harder to compensate for. INL requires that you know the errors at every possible input value and build a table of corrections. It can be done, but again, it's expensive (time consuming) and uses lots of memory. Few real systems go to this level of complexity. Engineers usually just buy a better ADC and reference. It is the rare ADC that has gain and offset error specs as low as 1/2 LSB.
In the good-old-days, trim-pots were used to correct for offset
and gain errors and to provide calibration. But trim-pots require
manual labor to adjust and are prone to their own thermal and
mechanical drifts. Good trim-pots are fairly expensive. A 10 turn
trim-pot costs about the same as an Arduino processor chip, thank
you, Gordon Moore. When writable non-volatile memory such as
EEPROM became widely available, trim-pots for calibration became
rare. In the case of most Arduinos, calibration factors can
be stored in either EEPROM or Flash program memory. If you don't
mind re-calibrating every time you power up, store them in RAM.
Calibration requires one or more precision sources that can be
measured. This is how modern digital oscilloscopes and waveform
generators calibrate themselves.
So what are the real error numbers?
ADC Voltage Reference
Most Arduinos default to using VCC as the ADC reference. If the VCC is 5V, then it either comes from the USB connector or from the voltage regulator. USB 5V as a reference is pretty bad, since it comes from a PC with about 3-5% basic accuracy. Then there are voltage and ground drops across the cable, depending on the host, the length of the USB cable, the other devices plugged into USB, and the USB current load. Even turning on a handful of LEDs on your Arduino can vary the supply voltage. Forget about powering 1/2 Amp of relays, servos or motors.
But.... there is one case where using a crappy reference works.
That is the case where the sensor is ratiometric and is powered
from the reference. When the reference voltage drops, the ADC gain
increases. As long as the sensor output also drops, the reference
errors cancel out.
Examples of ratiometric sensors are thermistors and other resistive sensors with a pull-up resistor, and bridge sensors (pressure and strain).
Adding a more precise voltage reference to an Arduino isn't hard.
Pay a few dollars, and connect a 2 to 4.5V (3.3V max on the Due)
reference source to the reference pin, and set the Arduino to use
external reference with analogReference(type). Compared to VCC,
this will give better accuracy, lower noise, and better stability.
The Due and some 3.3V Arduinos can use the 3.3V power supply for
a reference. It can still be noisy and have a few percent of
error, but is better than USB +5V since it is locally regulated.
What is the difference between a reference and a linear voltage
regulator? The big difference is that a reference is always
specified for temperature drift but a linear regulator generally
is not. A linear regulator is basically a high power, not too
accurate reference. Good references are also better than 1%,
specified for noise, low power, etc.
Measuring ADC noise
ADC noise is one limit of ADC accuracy. All you need for a quick noise measurement is a program that reads the ADC and sends the data out the serial port. The Arduino example "AnalogReadSerial" does this nicely: hold the input at a quiet, constant voltage, capture a few hundred readings over serial, and look at the spread.
Here is the noise plot and measurements for a Due 12 bit (4096
count) ADC. I measure 6 counts p-p and 1.00 counts RMS noise. For
a Gaussian response, the ratio of p-p to RMS is about 5 to 1.
Notice in the test data that there is only one sample at value
2277. Without that sample the p-p would be 5 counts, not 6.
1.00 counts of ADC noise basically takes 1 bit away from the
specified ("Marketing") 12 bits, making it an 11 bit ADC. Look at
the processor's ADC specs and you will see that the INL (Integral
Non-Linearity) spec is +/- 1.2 LSB max over the full temperature
range, and that the ENOB (Equivalent Number of Bits) is 10 typ.,
11 max. ENOB is a measure of AC (dynamic) accuracy, not DC noise.
This is the 12 bit ADC on a Due board, with the default +3.3V
reference. It has -20.9mV of offset error and +1.29 % gain error.
To correct for this, add 20.9mV to your measurement and multiply
by 1 / 1.0129.
The gain and offset specs for the Due ADC are a typical gain error
of 0.56% and a typical offset of 11.5 codes (9.2mV). This part had
20.9mV of offset, more than 2X the typical, and 1.29% gain error,
also more than 2X the typical. Don't trust typical specs.