

Pros and Cons of Fog Computing Cameras

As hardware advances and processing power increases, there’s growing interest in processing data “at the edge” rather than sending data back to a central point “in the cloud.” The interest is driven by a lot of different desires: data processing and storage reductions, performance, and greater autonomy.

Many people have begun to refer to this processing at the edge as “fog computing,” and in this article, we’ll dive into what that term really means, as well as what it means for AI vision systems.

What Is a Fog Computing Camera?

In short, fog computing is computation that takes place on or near the device itself rather than in the cloud. Instead of:

  • Connecting a device to a network that’s then connected to the internet
  • Transmitting a stream of data to a central processing point somewhere online
  • Processing the data at a central point, and then transmitting results/instructions back to the origin point

You can do all your computations at the device level. In an AI vision system, that means camera footage can be processed in the camera itself. While it might seem that the processing power contained in an AI-capable camera would be too limited, our DNNCam is proof that onboard processing power is more than ample.
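To make the contrast concrete, here is a minimal sketch of the fog approach: inference runs on the device, and only a compact result leaves the camera instead of the raw frame. The function names (`run_model`, `process_on_device`) and the toy "inference" are illustrative stand-ins, not DNNCam APIs.

```python
# Hypothetical sketch of on-device ("fog") processing.
# A cloud pipeline would transmit the entire frame; here,
# only the small result dictionary ever leaves the camera.

def run_model(frame):
    """Stand-in for on-device inference: count 'bright' pixels."""
    return {"objects_detected": sum(1 for px in frame if px > 200)}

def process_on_device(frame):
    # Run inference locally and return just the result payload.
    return run_model(frame)

frame = [10, 250, 30, 255, 90]   # toy stand-in for pixel data
print(process_on_device(frame))  # {'objects_detected': 2}
```

The raw frame never crosses the network; only the few bytes of the result do, which is where the bandwidth and latency savings discussed below come from.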

The Pros and Cons of Fog Computing Cameras

So, what are the benefits of a fog computing camera, and what are the downsides?

Pro – Flexible Use Cases

A fog computing camera does not require a high-speed internet connection to function. It can be configured to store data for later retrieval, or it can use a cellular or satellite modem to transmit processed data and results.

Pro – More Autonomy

When your AI vision system is dependent upon cloud data storage and processing, that has a negative impact on autonomy. That’s because transmitting and processing data remotely creates a delay between observation and action. While this delay might be just a few hundred milliseconds, that time lag can be limiting.

Con – Data Storage Limitations

Inevitably, a fog computing camera will run out of storage capacity at some point. While devices like our own AI-capable camera have 32GB of onboard storage (and a MicroSD expansion slot), it doesn't take long to fill that space if you want to store raw 4K video footage for later processing.

Still, if only processed data is stored – or if only select footage is stored – storage might not be a problem.

Pro – Fewer Failure Modes

When a camera is "dumb," all it can do is transmit the data it sees. If there's a break in the network and no data can be sent, no data is processed or stored. If the network is slow or struggling with capacity, the data that's sent can be corrupted or incomplete. And if the central processing point is offline, all the incoming data will likely never be processed. Redundancy can help, but by their very nature, even redundant dumb cameras share the same failure risks (namely, their dependence on a network connection).

With a fog computing camera like our DNNCam, the risks of a failure that results in a total data loss are much lower. Not only can these cameras process and store data internally, but they can also be deployed redundantly. In one implementation, multiple DNNCams would be used to process the same visual data and then essentially “vote” before sending a signal. So, not only do we have redundancy in case of camera failure, but we have greater accuracy because multiple autonomous AI systems are reaching the same conclusions independently.
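The "voting" step described above can be sketched as a simple majority vote over the independent camera outputs. This is an illustrative sketch, not the DNNCam implementation; the `vote` function and its threshold are assumptions.

```python
from collections import Counter

def vote(signals):
    """Majority vote across redundant camera outputs.

    Hypothetical illustration: each camera independently reaches
    a conclusion, and a signal is sent only when a strict majority
    of cameras agree.
    """
    winner, n = Counter(signals).most_common(1)[0]
    if n > len(signals) / 2:
        return winner
    return None  # no consensus; withhold the signal

print(vote(["person", "person", "vehicle"]))  # person
print(vote(["person", "vehicle", "none"]))    # None
```

Requiring a strict majority means a single faulty or failed camera cannot trigger an action on its own, which is the accuracy benefit the voting scheme is after.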

Con – Higher Hardware Costs Compared To Dumb AI Cameras

Obviously, a camera with enough onboard processing power to analyze 4K video footage is going to cost more than a camera that doesn’t have any processing power. While the cost of a fog computing AI camera isn’t extraordinary (most of the companies using our DNNCam, for example, were pleasantly surprised at our pricing), it’s not as cheap as a simple HD camera.

Pro – Lower Network, Storage, and Processing Costs

Most dumb AI cameras will use quite a bit of network capacity, cloud data storage, and cloud processing power. While these costs are rarely large, they add up over time. If, for example, a dumb camera uses $25 worth of cloud data storage and processing power each month it’s in operation, that’s $300 a year in costs above and beyond the cost of the camera and the network connection. It doesn’t take long for a fog computing camera to pay for itself in that scenario.
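The payback arithmetic above is easy to check. The $25/month figure comes from the example in the article; the camera price premium below is an assumed placeholder, not a real price.

```python
# Worked version of the article's cost example.
monthly_cloud_cost = 25                       # $/month, from the article's example
annual_cloud_cost = monthly_cloud_cost * 12
print(annual_cloud_cost)                      # 300

# Assumed premium of a fog camera over a dumb camera (placeholder value):
camera_premium = 600
payback_months = camera_premium / monthly_cloud_cost
print(payback_months)                         # 24.0
```

Under those assumptions, the extra hardware cost is recovered in a couple of years of avoided cloud fees; a smaller premium or higher cloud bill shortens the payback accordingly.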

Summing Up

Fog computing is definitely on the upswing. While there will always be a need for cloud data storage and processing, it’s likely that we’ll see a hybrid of cloud and edge or fog systems in use in the future. There are a lot of benefits to fog computing, and the additional costs often aren’t significant.

If your company is looking for a fog computing camera for a specific application – or if you'd like to build an application on top of an existing fog computing camera system – be sure to contact us.

Darren Odom

The founder of Boulder AI, Darren’s engineering resume includes consulting for leading-edge research companies, product design and development, custom software application development, and more. Darren holds numerous patents, and is a recognized expert in the fields of deep learning and AI hardware development.