What’s cooking?

Walking into a customer’s lab recently I detected the tell-tale odor of insulation cooking.  One of the engineers I was with said, “It always smells like that.  You get used to it.”  It shouldn’t.  It is not a good sign.  High speed switches, fast network interfaces, and dense processing nodes all dissipate a lot of power as heat, and FPGAs are notoriously difficult when it comes to estimating power dissipation.  When heat is not adequately removed from a system, the system fails at an accelerated rate.  It is very important to consider thermal issues, and have a plan to manage them, up front in any new design.

Heat can be transferred via three mechanisms: conduction, convection, and radiation.  Conduction occurs in solids and the heat is transferred via motion of the molecules.  Convection is the transfer of heat via a gas or fluid, so by definition it is not possible in a solid.  Radiation is the transfer of heat via electromagnetic waves.  The sun warms us via radiation.  In a packaged chip we get conduction from the die to the surface of the package and then convection from the package surface to the surrounding air.  The amount of heat transfer from the case to the air via convection is dependent on airflow.  It is important to note that, for both conduction and convection, the temperature rise is directly proportional to the power dissipation and inversely proportional to the area the heat is flowing through.

In the picture below I’ve depicted a BGA package: the balls are the little circles on the bottom, the chip is the clear rectangle inside the package (crosshatched area), and a heat sink is stuck on the top.  A model of the thermal impedance is drawn to the right.


Simply put, thermodynamics tells us that:

                        Rja = Rsa + Rcs + Rjc = (Tj - Ta)/Q

Where Tj is the junction temperature and Ta is the ambient temperature, both in degrees Celsius.  Q is the power dissipation in Watts.  Rja is the thermal impedance from the chip junction to the ambient air.  Working down the stack: Rjc is the impedance from the junction to the case (the package surface), Rcs is the impedance from the case to the heat sink, and Rsa is the impedance from the heat sink surface to the ambient air.

Typically, for telecom-type systems, the max Ta is 50 degrees C plus a 10 degree C rise inside the box.  A typical max semiconductor Tj is 115 degrees C.  Substituting in these values allows us to calculate the max Rja for different levels of power dissipation.  You can see that as the thermal impedance Rja increases, the allowable power dissipation drops quickly, assuming constant Tj and Ta.
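
A minimal sketch of that substitution, using the 115 degree C junction limit and the 50 + 10 degree C effective ambient from above (the Rja values swept below are illustrative, not from any particular package):

```python
# Maximum allowable power dissipation for a given junction-to-ambient
# thermal impedance, rearranging Rja = (Tj - Ta)/Q into Q = (Tj - Ta)/Rja.

T_JUNCTION_MAX = 115.0     # degrees C, typical max semiconductor Tj
T_AMBIENT = 50.0 + 10.0    # degrees C: max ambient plus rise inside the box

def max_power(rja_c_per_watt):
    """Maximum power (W) before the junction exceeds its limit."""
    return (T_JUNCTION_MAX - T_AMBIENT) / rja_c_per_watt

for rja in (2.0, 5.0, 11.0, 22.0):
    print(f"Rja = {rja:5.1f} C/W -> max dissipation {max_power(rja):5.1f} W")
```

Doubling Rja halves the allowable power, which is why a poor heat sink or stagnant air hurts so quickly.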

I recommend measuring case temperatures right on the chip package and then using the chip manufacturer’s specified Θjc (I use the notation Rjc here) to calculate the junction temperature.  A good DMM with a temperature probe can give reasonably accurate case temperatures in a pinch.  There are also thermal cameras and handheld infrared heat detectors with laser sighting for those with a bigger budget.

As an example, a typical Rjc for a BGA package is 0.18 degrees C/Watt.  Let’s look at the case temperature and see what we can conclude about the junction temperature inside.  We know that:

Rjc = (Tj-Tcase)/Q

0.18 = (Tj - Tcase)/Q

Tj = Tcase + 0.18Q

You can see that the junction temperature will track the case temperature closely (within a degree) for power dissipation of 5 W and under.  Above 5 W it gets difficult to get the heat out of this package, so the die temperature will rise and the failure rate will increase.  Working through and managing these issues up front in a design cycle will save you a lot of headaches and a smelly lab.
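
A quick sketch of that relationship (the 70 degree C case temperature below is just an example value, not from the text):

```python
# Junction temperature from a measured case temperature, using
# Tj = Tcase + Rjc * Q with the 0.18 C/W BGA figure from above.

RJC = 0.18  # junction-to-case thermal impedance, C/W

def junction_temp(t_case_c, power_w):
    """Estimated junction temperature given case temperature and power."""
    return t_case_c + RJC * power_w

# Below ~5 W the junction tracks the case within a degree:
for q in (1.0, 5.0, 10.0, 20.0):
    print(f"{q:4.1f} W -> Tj is {RJC * q:4.1f} C above the case")
```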

The Road to 5G is Paved with Good Intentions

One morning a couple of weeks ago I was sitting in my office watching the snow drift down and wondering when spring was going to arrive in New England.  Mr. Coffee, as usual, had provided a fresh pot of hazelnut coffee and I had a bag of cinnamon donuts from a shop that still makes them by hand.  The thing about donuts is they put a solid foundation under your whole morning.  Breakfast of champions.  The phone rang, disturbing my deep thoughts.  The caller wanted to know if I would be willing to participate in a private discussion of 5G applications with a group at an upcoming conference.

One of the problems with talking in general about “5G applications” is that the phrase itself means different things to different people– and they all can be quite passionate and vocal about their favorite application.

From the telco perspective there are two markets for 5G services:  fixed wireless broadband and mobile 5G services.  Pretty simple.  The telcos have been actively engaged in acquiring spectrum and launching trials for the past year or so.  It’s pretty obvious where they are going– at least in the near term.  But many advocates, developers, and entrepreneurs see things differently.  They see at least 4 application areas:

  1. Fixed wireless broadband
  2. 5G cellular service as a faster extension of 4G service
  3. Internet of Things (IoT) network enabler with low energy, low latency, mobility, ubiquitous connectivity, security, and billions of nodes.
  4. Wireless network unification: replace WiFi and a myriad of other short range wireless networks with one standard network.

I’m not one who believes spectrum is scarce, but it is highly regulated.  Spectrum allocation suffers from the same rent-seeking problems seen whenever government and corporations collide (or collude).  Applications 3 and 4 may require expansion into parts of the RF band (above 30 GHz) that present big challenges.  The wavelength at these frequencies is in the range of 1 mm to 10 mm, so it is called the millimeter wave band.  The main problem is that signals encounter much more attenuation at these wavelengths.  They can’t travel as far as today’s cell signals do and they don’t penetrate walls well.  Since we can’t change the main culprits in our environment (walls, water vapor, and oxygen), we have to space the radio transmitters and receivers closer together and/or use beamforming technology to focus the signal only where it is needed.  Beamforming is spatial filtering: like a flashlight, it focuses the signal energy on the target rather than spreading it all around.  Beamforming works great for fixed entities, like an antenna array in a factory talking to a machine bolted to the floor.  It is much harder for anything moving rapidly, like a car.
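
A first-order way to see the range problem is free-space path loss, which grows with frequency.  This sketch compares a familiar cellular band to two candidate mmWave bands; it deliberately ignores oxygen/water-vapor absorption and wall penetration loss, both of which penalize mmWave even further:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

d = 100.0  # meters, an illustrative cell-edge distance
for f in (2.0e9, 28.0e9, 60.0e9):
    print(f"{f/1e9:5.1f} GHz over {d:.0f} m: {fspl_db(d, f):5.1f} dB")
```

Going from 2 GHz to 60 GHz costs about 30 dB of free-space loss for the same distance and antennas, which is exactly the gap beamforming gain and denser radio spacing have to claw back.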

The telcos are looking for ways to extend their cellular franchise and perhaps reduce cost in last mile connectivity.  Given the network architectures proposed, though, it remains to be seen if there is a compelling cost reduction in widespread fixed wireless broadband.  They may just stay with fiber/coax to the home, or they may decide that fixed wireless broadband, given its much higher speed, is a good replacement for DSL.

I see 5G cellular and WiFi as more complementary than one replacing the other.  The WiFi Alliance says there are more than 30 billion WiFi capable devices in the world today, and WiFi hot spots are ubiquitous.  Each technology continues to evolve to higher bandwidths.  The big chip and equipment providers like Cisco, Qualcomm, and Broadcom are betting on both.  I don’t see application 4 as likely to happen.  Hmmm, but what about application 3?

The IoT and Big Data

We have had a remote monitoring application for some time now called BBD Control.  Typically we monitor temperature and various sensor voltages.  Time-stamping and logging measurements to a file are sufficient for many applications, but the types and amount of data that can be collected today are remarkable.  To me, this is the true promise of the Internet of Things (IoT).  It is not about 5G connectivity so much as it is about getting accurate data into people’s hands.  I’ve seen sophisticated post-processing and analysis lead to great insight based on a simple log file of accurate data.  The promise of Big Data all starts with the data.

As we got into more demanding applications we developed a web-based interface that controls measurements and displays results in tabular format, so you can check on things remotely using a web browser.  Along the way we added an SMTP email capability to inform users of measurement results and alarms.  This turned out to be quite handy.  It is nice to be able to show someone an event happening in their system miles away while having a nice lunch at the Farmer’s Daughter (a local favorite).

Recently we got involved in a project to add AC power measurements to BBD Control, so we can measure and analyze power draw.  This entailed adding a small microcontroller-based circuit board and a fair amount of software to control and display the measurements.  It measures AC voltage, current, and frequency, then calculates real, apparent, and reactive power.
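
For reference, here is a sketch of the three power quantities the monitor reports.  It assumes a sinusoidal voltage and current with RMS magnitudes and a phase angle between them; the actual BBD Control firmware and its interfaces are not shown here, and the numbers in the example are made up:

```python
import math

def ac_power(v_rms, i_rms, phase_deg):
    """Return (real W, reactive VAR, apparent VA) for a sinusoidal load."""
    apparent = v_rms * i_rms                                  # S = V * I
    real = apparent * math.cos(math.radians(phase_deg))       # P = S * cos(phi)
    reactive = apparent * math.sin(math.radians(phase_deg))   # Q = S * sin(phi)
    return real, reactive, apparent

p, q, s = ac_power(120.0, 2.0, 30.0)  # hypothetical 120 V, 2 A, 30 degree load
print(f"P = {p:.1f} W, Q = {q:.1f} VAR, S = {s:.1f} VA")
```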

Here is a screenshot I grabbed of the power monitor in action.  It is quite useful for applications like measuring power draw for a custom blockchain processing engine running different software algorithms.  We scroll the 10 most recent measurements in a table on the power monitoring page.  Measurements are initiated by clicking the measure power button, or they can be made automatically at fixed intervals.

Principles of Decoupling Networks Part 3

In this post I will discuss some guidelines for location and number of decoupling caps for a generic PDN.

How close should the capacitors in a decoupling network be to the power pins they serve? One way to answer that is to think about it in terms of how quickly a decoupling capacitor can respond to a change in voltage on the power plane. When an IC switches internally it demands current from the power plane. The power plane can’t provide the current instantaneously, so the voltage starts to drop at the power pin. This drop needs to propagate to the nearest decoupling cap before the cap can start to respond, and that response then must travel back to the power pin of the IC. The power pin waits a full round-trip delay before it gets relief. A cap placed too far away from the power pin can’t respond in time to the transient event. As the cap is placed closer to the power pin, the energy transmitted to the power pin increases until it reaches 100% at zero distance. If the cap is more than a quarter wavelength away from the power pin it provides essentially no relief. We can achieve efficient energy transfer at some fraction of λ/4 from the power pin. For example, one tenth of a quarter wavelength is a good target (placement radius = λ/40).

We know that a capacitor’s resonant frequency is given by the following.

Fres = 1/(2π√(LC))

The period of that resonant frequency is:

Tres = 1/Fres  

Physics tells us that a wave’s wavelength equals its speed multiplied by its period.  On a circuit board it is convenient to work with the propagation delay per unit length, which is the inverse of the speed:

λ = Tres/Vprop

Where Vprop is the propagation delay per unit length (seconds/inch) of a board with a given dielectric.  For microstrip FR-4 connections this is about 140 ps/inch.

If we use a 0.001 uF MLCC cap in an 0402 package with a mounted inductance of 2.0 nH as our high frequency decoupling, we can calculate λ = 63.5 inches.  Using our desired placement radius from above of λ/40 tells us that we need to place this cap within 1.59 inches of the power pin.  That’s not so bad! We don’t need to stuff them all on the back of the board under the IC’s footprint unless we want to.  It is desirable to keep them close to the power/ground plane sandwich to minimize inductance; this could be on either side of the board.
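
The whole calculation chains the three formulas above, so it is easy to script.  This sketch uses the 2.0 nH mounted inductance and 140 ps/inch FR-4 figure from the text:

```python
import math

VPROP = 140e-12  # propagation delay, seconds per inch (microstrip FR-4)

def placement_radius_in(c_farads, l_henries=2.0e-9):
    """Placement radius (inches) = lambda/40, lambda = Tres/Vprop."""
    f_res = 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))
    wavelength_in = (1.0 / f_res) / VPROP
    return wavelength_in / 40.0

r = placement_radius_in(0.001e-6)  # 0.001 uF = 1 nF
print(f"1 nF cap: place within {r:.2f} inches of the power pin")
```

Because the resonant frequency scales as 1/√C, the radius grows by √10 (about 3.16x) for every decade of capacitance, which is the pattern in the table below.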

Larger value capacitors have correspondingly lower resonant frequencies and a larger placement radius.  A 1 uF capacitor, for example, can go just about anywhere on the board and still be effective.  The following table shows capacitance versus placement radius.

Capacitance   Resonant Frequency   Placement Radius
1 nF          112.5 MHz            1.59 inches
10 nF         35.6 MHz             5.02 inches
100 nF        11.3 MHz             15.8 inches
1 uF          3.6 MHz              50.1 inches
10 uF         1.1 MHz              158.7 inches
100 uF        356 kHz              501.8 inches
1000 uF       112.5 kHz            1586.7 inches

So how many decoupling capacitors do we really need? As noted earlier we want to use a broad mixture of capacitor values to approximate our desired flat PDN impedance. I have successfully used a formula that roughly doubles the quantity of capacitors for every decade decrease in value. The following table illustrates the scheme.

Capacitance        Quantity Percentage   Example Capacitor Type
470 to 1000 uF     4%                    Tantalum
1.0 to 4.7 uF      14%                   X7R 0805
0.1 to 0.47 uF     27%                   X7R 0603
0.01 to 0.047 uF   55%                   X7R 0402

So, for example, if I have an IC with 10 active power pins I would use six 0.01 uF caps placed within an inch of the pins close to the power plane, three 0.1 uF caps, and one 1.0 uF cap nearby.  In a corner of the board I would place a 470 uF Tantalum cap.  If I added another of the same type of IC to the design I would add five more 0.01 uF caps, two 0.1 uF caps, and one 1.0 uF cap.  I would share the 470 uF Tantalum cap between multiple ICs.
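
A sketch of that allocation, applying the percentage mix from the table to a pin count.  The rounding policy (round to nearest, never below one) is my own choice for the example, not a rule from the text:

```python
# Doubling-per-decade capacitor mix, scaled by the number of active power pins.
MIX = [("0.01-0.047 uF", 0.55),
       ("0.1-0.47 uF",   0.27),
       ("1.0-4.7 uF",    0.14),
       ("470-1000 uF",   0.04)]

def cap_counts(power_pins):
    """Suggested quantity of each capacitor range for one IC."""
    counts = {}
    for name, frac in MIX:
        counts[name] = max(1, round(power_pins * frac))
    return counts

print(cap_counts(10))
```

For 10 power pins this reproduces the six/three/one split in the example, plus the single bulk Tantalum that can be shared between ICs.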

Basic Principles of Power Decoupling Networks Part 2

Decoupling is the technique used to reduce switching noise in the Power Distribution Network.  CMOS transistors draw current when they are switching.  This sudden current draw leads to a voltage drop or ripple on the power plane of the circuit board.  The ideal PDN provides a low impedance path to minimize the voltage ripple on the power plane.

The diagram below shows an ideal low impedance PDN along with the frequency bands where various parts of the PDN respond.  At low frequencies the power supply is the dominant current supplier.  Between the kHz and MHz ranges, low frequency or bulk decoupling caps provide the energy needed.  Between the MHz and GHz bands, high frequency decoupling takes over, and finally the capacitance of the board’s power planes is most effective at very high frequencies.

Ideal PDN Impedance

A real capacitor is usually modeled as a series R-L-C circuit. When you plot its impedance versus log frequency you get a picture similar to that shown below.

Real Capacitor Impedance

The left side of the V is due to the capacitive reactance and the right side of the V is due to parasitic inductance.  The notch of the V is the resonance point, where the impedance is given by the capacitor’s equivalent series resistance.  Since we know that

V = I/(2πFC)

and capacitors in parallel add, we can decrease ripple voltage by adding capacitors in parallel to the PDN.  A different value of capacitance will have its V centered at a different frequency.  If you compare the above V-shaped response to the ideal PDN impedance we are trying to achieve, though, you can see a problem.  How do we turn the V-shaped response into the flat response we want? If we put capacitors of various values in parallel we can take a step in the right direction.  The idea is to use a bunch of V’s at different frequencies to approximate the flat response.  This works well as long as you watch out for a phenomenon called anti-resonance.  Anti-resonance arises when the right side of the V for one capacitor overlaps the left side of the V for another capacitor: where they sum you get a lump of increased impedance.  Anti-resonance can be managed by using low inductance capacitors and many different values of capacitance.
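
You can see the anti-resonant lump with a few lines of math.  This sketch puts two series R-L-C capacitor models in parallel; the component values are illustrative (20 mOhm ESR, 1 nH ESL), not from the text:

```python
import math

def z_rlc(f, r, l, c):
    """Complex impedance of a series R-L-C branch at frequency f."""
    w = 2.0 * math.pi * f
    return complex(r, w * l - 1.0 / (w * c))

def z_parallel(f, branches):
    """Impedance of several R-L-C branches in parallel."""
    y = sum(1.0 / z_rlc(f, *b) for b in branches)
    return 1.0 / y

branches = [(0.02, 1.0e-9, 100e-9),  # 100 nF: notch near 16 MHz
            (0.02, 1.0e-9, 1e-9)]    # 1 nF:   notch near 159 MHz

for f in (5e6, 15.9e6, 50e6, 113e6, 159e6, 500e6):
    zmag = abs(z_parallel(f, branches)) * 1000.0
    print(f"{f/1e6:7.1f} MHz: |Z| = {zmag:9.2f} mOhm")
```

Between the two notches, one branch looks inductive and the other capacitive; where their reactances cancel (around 113 MHz here) the parallel impedance spikes well above either notch, which is the anti-resonant lump the text warns about.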

Our first rule is then: put a number of different-value capacitors in parallel and minimize the inductance.  We need to minimize inductance because we know that

V = 2πFL × I

which tells us that, to minimize ripple voltage at high frequencies, reducing the inductance is more effective than increasing the capacitance.

The inductance of a capacitor mounted on a circuit board includes the capacitor’s parasitic inductance and additional inductance from the loop the current traverses from the power plane to the cap and back to the ground plane as shown below.    So how you mount your decoupling capacitors matters as well.

Capacitor Loop Current

What are some low inductance ways to mount a capacitor?

The above diagram shows various combinations of pads, trace, and vias.  The ones toward the right side of the diagram minimize inductance.  Short, fat traces minimize inductance; for example, I prefer 10 mil traces for power in many cases.  Larger diameter vias have less inductance, and the shorter the via, the less inductance as well.  In some designs putting the vias inside the component’s pads is an option.  This can lead to problems assembling the circuit board though: in-pad vias can act like little straws and suck up the solder paste, so I didn’t include that option here.

So far we know that we need a variety of capacitors with different capacitance values.  We need capacitors with low parasitic inductance and they must have low inductance connections to the power plane.  There is one other element to consider: the placement of the capacitors.  I’ll talk about that in my next post.

Basic Principles of Power Decoupling Networks Part 1

I was sitting in my office with a fresh cup of coffee and a bag of donuts contemplating which donut to eat first.  As I pondered the merits of a jelly filled beauty covered in sprinkles I noticed the sprinkles looked like tiny surface mount decoupling capacitors on a circuit board.   I have seen high speed digital designs with no decoupling capacitors.  The designer swore it would work because he had sandwiched the respective power and ground planes in such a clever fashion.  It didn’t.   And I have seen people put so many caps on a board they looked like a Bedazzler project.  That one worked but the layout person threatened homicide.

As I ate my donut I thought about all the myths and superstitions surrounding power decoupling I have heard over the years.  It seems to me that a designer’s approach to power decoupling can tell you a lot about their personality.  I participate in a lot of design reviews.  The people in the reviews are usually much more interesting than the designs. Whether they are bent on proving to the world how clever they are or just obsessive about rules they may not understand fully often shows up in how they approach keeping the power rails clean.

A Power Distribution Network (PDN) consists of the power supply, power/ground planes, power traces, and decoupling capacitors.  The purpose of a decoupling capacitor is to provide clean power to the devices on a circuit board.  Power consumed by digital devices varies over time.  Most chip manufacturers specify the cleanliness of that power. For example, +/-5% is a common specification for a power pin. This fixes the maximum amount of noise or “ripple voltage” that can ride on power supply traces.  The ripple voltage comes from current switching within the devices on the board.  Low frequency ripple usually comes from chips being enabled or disabled.  This type of ripple can happen on a time scale from milliseconds to days.  I once found one that happened every 3 hours or so and resulted in a bus protocol violation that powered down the board.  High frequency ripple comes from current switching within a device and the time scale is related either to the clock period or a higher harmonic.

A full-blown detailed PDN design encompasses SPICE modelling and analysis followed by prototype measurements using a scope or network analyzer.  Verifying the accuracy of the models used is essential.  This work can take quite a bit of time and money and is really only needed in extreme cases.  Most of the time a less time consuming approach is acceptable.   This approach uses a good decoupling strategy guided by experience that allows you to quickly produce a working prototype you can then experiment with to reduce cost and make the boss smile.

Engineering always involves trade-offs between time, risk, and money.  Engineers who last in the business always leave some wiggle room for unexpected error.  As I said to the engineer mentioned above who designed a board with no decoupling: “What happens if you don’t have a decoupling network and you are wrong?”  It doesn’t take much time or effort to put in a basic decoupling network if you understand the principles.  And you can easily experiment with removing capacitors or changing values once you have the prototype.  He proceeded to cost his employer a lot of money by producing an unreliable prototype with no capacitors and hard to debug noise problems, and then had to re-layout the board.  A good strategy for PDN design in most cases does not have to be overly time consuming to implement.  A few guidelines based on experience can speed things up.

So what are some guidelines that lead to happy results?  That’s where we are going.  To set the stage for that though I need to delve a little deeper into what the decoupling network is really doing.   Next Post.

Rapid Development with HW Building Blocks: Raspberry Pi SoM

In recent years there has been a lot of innovation in the System on Module (SoM) market.  The choices are plentiful for processor sub-systems integrated with memory and peripherals and packaged on a small module.   A Google search for “SoM, system on module” returns about 648,000 results!  Scanning through the results reveals a range of options  from 8 bit micro controllers to 32 bit processors capable of running Linux.  In this post I will discuss one of the more innovative developments in the SoM marketplace– the Raspberry Pi.


Raspberry Pi 2

Raspberry Pi:  The Darling of the Maker Movement

Somewhere between full Open Source Hardware and traditional Commercial-Off-The-Shelf processing modules sits the Raspberry Pi.  Although some of the design documentation is freely available from the Raspberry Pi Foundation, Broadcom holds a lot of the key intellectual property because its highly integrated processing chip is the heart of the design.  This credit-card size, low power (~5 W), low cost ($35) processing engine has proven to be a real game changer for the Maker community and it is earning serious consideration for several types of embedded systems.

The original Raspberry Pi is based on the Broadcom BCM2835 SoC, which includes a single ARMv6 processing core running at 700 MHz along with a Graphics Processing Unit (GPU) capable of hardware video encode/decode (MPEG-2 and VC-1 decode require purchased license keys).  The Raspberry Pi 2 Model B, announced in February 2015, has a quad-core ARMv7 processor, a GPU, and more RAM.

DRAM on the Pi ranges from 256 Mbytes on original systems up to 1 Gbyte on the Raspberry Pi 2.  Note that the GPU shares DRAM memory with the processor.  High definition video decode can eat up significant amounts of shared memory so careful analysis is necessary here.  Nonvolatile memory for boot, OS, and all persistent storage is via Secure Digital (SD) or micro SD sockets depending on model.  This makes it easy to upgrade nonvolatile memory capacity but presents problems for security sensitive embedded systems.  SD cards also pose problems for embedded systems that need to survive shock and vibration events.

The PI has a rich mix of peripheral interfaces for many applications: USB, 10/100 Mbps Ethernet, GPIO, I2C, SPI, and an analog audio output. It has a 15 pin CSI connector to allow direct attachment of a camera module and it has both HDMI and composite video outputs.  We have used it for applications as diverse as testing Analog to Digital converters to video monitoring to building cheap storage devices.

Originally intended as a platform for kids to learn about computers, the Pi is now receiving interest as an embedded system component.  Responding to this interest, the Raspberry Pi Foundation created a new module in SODIMM format called the Raspberry Pi Compute Module.  The Compute Module uses the same Broadcom SoC chip as the original Raspberry Pi, so it has the single ARMv6 processing core along with a GPU.


Raspberry Pi Compute Module

Note that the Compute Module does not use a removable SD card, which makes it more suitable for vibration and security sensitive applications.   The Raspberry Pi Compute Module is an interesting option for some embedded systems.  I am not a big fan of SODIMMs though if a system has to survive shock and vibration.

Operating Systems

The Raspberry Pi runs a variety of Linux based operating systems.  Arguably the most popular is Raspbian, which is based on the Debian “Wheezy” release.  There are many others.  The Raspberry Pi community has created an install manager called NOOBS that can load your OS of choice from the SD card on initial boot up.  Note that the original Raspberry Pi cannot run Windows or Ubuntu.  The new Raspberry Pi 2 is rumored to be able to run Windows 10 and an Ubuntu distribution called Snappy Ubuntu Core.

The Raspberry Pi has a huge following of devoted users and developers providing support and there are many related products available. You can run a wide variety of open source software on the Pi.  The Pi offers decent performance, multimedia capability, and a mix of useful interfaces at a low price point.   And because of its dedicated community of developers it is very easy to use.  In fact, it is so cheap and easy to use it can be a great tool to speed up embedded system development.  We have several in our lab and keep finding more ways to use them.  That’s a topic for another post I guess!

Rapid Development with HW Building Blocks: Reference Designs

Second in a series…

In the first post in this series we looked at using System on Chip technology to save time in developing an embedded system.  In this posting we look at the use of hardware reference designs.

Another resource that can save development time is a reference design that uses components you want to use in your system.  These reference designs are typically provided by a semiconductor vendor but were designed by a third party.  A good reference design will at least include a working board, schematics, and physical design files that allow you to build your own hardware based on a working design.  This can be a real time saver.  Occasionally the source schematics are included, and if you use the same schematic capture tool you can save a lot of time that would have been spent creating symbols and drawing schematics.  Most often, though, the schematics are in PDF format.

Sometimes you can get low level firmware, software drivers, codecs, or even higher level software included in a reference design.  A good reference design board will allow software developers to get going with a lot of their low level software and firmware before actual hardware development.  A working reference design board can also allow experimentation and verification of things like a critical task’s real time performance capability, boot up and reset issues, and power and thermal issues. There are tests you can run in seconds on a lab bench that are not practical in a hardware simulator.

There are important issues to consider when using a reference design.  A thorough design review of the reference design is necessary to avoid problems later on.  The assumptions about acceptable design practice made by the developer of the reference design may not be the same as yours.  Find this out up front, not via a call from an angry customer.  Also, carefully check any licensing issues or terms and conditions for copying a reference design.  Hardware licenses can be different from software licenses because they are based on patent law, not copyright law.  It’s important to know the difference.  A patent-based license can control the use and manufacture of your device based on the design documentation.  A copyright-based license just controls the distribution of the design documents.  Having a lawyer review the license up front is always a good idea.

Rapid Development with HW Building Blocks: System on Chip

First in a series of posts…

When discussing rapid development of complex embedded systems the issue of using third party hardware and software building blocks inevitably comes up.  There are a lot of options and issues when it comes to using embedded system building blocks.  Nobody wants to re-invent the wheel but picking the wrong wheel can overturn your chariot!  In this series of posts we will look at some options and issues for speeding up hardware development using different building blocks.  In a later series we will look at some of the software issues.

Semiconductor vendors offer an amazing range of System-On-Chip (SoC) devices.  Choosing the right SoC can really shorten development time if most of what you need is already in the silicon.  A critical area that is sometimes missed though is an analysis of how your system’s desired performance running your application compares to the SoC’s likely performance running your application in your system.  What parts of an SoC you use and how your traffic flows affects performance.  How your system is physically partitioned and how you interface the SoC to your system can also have dramatic effects on performance.  The thermal environment in your system can impact how much performance you can squeeze out of an SoC.

If you develop an FPGA based SoC for your system you can use some pretty powerful low-cost vendor-provided design tools and no-charge FPGA vendor cores (nothing is totally free, my friend!).  These cores can be a big time saver in chip design.  The FPGA vendor has knowledge of its cores, tools, and silicon, so they can help with development problems at a deeper level than a third party core vendor.  Also, your interests and the FPGA vendor’s interests are aligned: they want you to get to market as fast as possible so they can make money too.  Be aware that FPGA vendor core licenses often specify you can only use them in the vendor’s chips.  This can present problems if you want to port your design later.  It’s best to study the license details and be up-front with the FPGA vendor on what you plan to do.

In our next post we will look at the use of hardware reference designs.

Raspberry Pi based NAS

The Wall Street Journal recently ran a story called “Building Your Own Cloud” by Joanna Stern.  It’s a fun, well written piece.  I would link to it but it is behind their pay-wall unfortunately.  It talks about purchasing commercial Network Attached Storage (NAS) servers called Personal Cloud systems and using them on your home network.  The author reviews offerings from Western Digital and Seagate and discusses her experience with these off-the-shelf solutions.  The systems she reviewed run about $300.

Joanna’s story reminded me of a fun project we did one afternoon at Black Brook Design — building our own NAS for the lab.  We had some Raspberry Pi single board computers lying around along with a couple of portable USB hard drives.  The question was how hard would it be to make our own redundant NAS.  After rooting around in the lab for some parts and doing a bit of searching on the Internet it turns out it is fairly simple and cheap to build a redundant NAS device for yourself.  We found some excellent instructions online here:

Raspberry PI NAS Instructions

We found two things we particularly liked about the above instructions.  First, they use NTFS-formatted disks, so we can always unplug one of the USB hard drives from the Linux-based Raspberry Pi and plug it into a Windows computer.  This was very handy when initially transferring files to our NAS.  Second, we have to share files between Linux and Windows machines, so Samba was a good choice for our network share software.

We decided to build 1 TB of redundant storage into our NAS.  We used two Seagate Backup Plus Slim 1 TB portable hard drives ($64 each), a powered USB hub ($8 – $12), and a Raspberry Pi ($35) running the Raspbian operating system.  We recommend using a powered USB hub to connect the USB hard drives to the Raspberry Pi, as the drives drew a bit too much current for our Pi to source via its USB ports.  So we built a 1 TB redundant NAS device for about $175 in a couple of hours.  We attached our NAS to our network using a Fast Ethernet connection on a nearby Ethernet switch.