Disadvantages of the Cloud

As the world moves toward AWS, Azure, and Google Cloud, I'd like to take a moment to reflect on the areas where cloud services maybe aren't so strong.

Connectivity/Availability

While the popular cloud services are the best in the world at availability and reliability, if your application requires those qualities at a particular location, your service will only be as reliable as your Internet connection. For businesses, high-quality, redundant connections are readily available. For IoT-in-the-home, you’re relying on the typical home broadband connection and hardware. I won’t be connecting my door locks to that.

When disaster strikes, every layer of your application is a possible crippling fault line. This is especially the case when your application relies on services spread across the Internet. We saw an example of this recently when AWS experienced a hiccup that was felt around the world. The more tightly we integrate with services the world over, the more we rely on everything in the world maintaining stability.

Latency and Throughput Performance

If latency is important, locality is key. Physics can be difficult that way. Many attempts at cloud gaming have fallen apart because of this simple fact. Other applications can be similarly sensitive.
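To put rough numbers on that physics: even at the speed of light in fiber, distance alone imposes a latency floor that no amount of server optimization can remove. A back-of-the-envelope sketch (the fiber speed is the usual approximation of two-thirds of c, and the distance is illustrative):

```python
# Back-of-the-envelope check on why locality matters for latency.
# Assumption: signals in optical fiber travel at roughly 200,000 km/s
# (about two-thirds the speed of light in a vacuum).

FIBER_SPEED_KM_S = 200_000

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fiber, ignoring routing,
    queuing, and processing delays entirely."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

# A player ~2,000 km from the nearest cloud region can never see
# less than ~20 ms of network latency, before any real-world overhead.
print(min_round_trip_ms(2000))  # 20.0
```

Real-world routes are longer and slower than this ideal, so actual latencies only go up from there.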

Likewise with bandwidth, even the best commercial connections are unlikely to achieve 10Gbps to the Internet. If your requirements stress the limitations of modern technology, the typical cloud services may not work for you.

Security

While the cloud providers themselves are at the cutting edge of security practice, the innumerable cloud services and IoT vendors built atop them often aren't. Because of the ubiquity of the cloud model, it's getting more difficult to find vendors willing to provide on-premises alternatives, which could be more secure by virtue of never shuffling your data through the Internet.

There’s also the simple fact that by using cloud services, you’re involving another party in the storage and manipulation of your data. The DoD types will certainly give this due consideration, but events like CelebGate (1 and 2) are indications that maybe more of us should. Every new online account we create is another door into our lives, and the user/password protected doors aren’t the most solid.

Another concern along these lines is legal access to your data. If you’ve shared data with a cloud provider, can it be legally disclosed without your knowledge via warrant? Can foreign governments request access to any data stored within their borders? These issues are still being worked out with somewhat varying results around the globe. This might be an even smaller concern to the typical user, especially for those of us in the US. However, I feel I would be remiss if I didn’t note that politics of late have gotten…interesting.

Precision Metrics

I haven't heard about this problem recently, but it has been noted that it's difficult to get good, concrete metrics out of at least some of the large-scale cloud providers. Software failures can occur for reasons that are invisible from inside the virtualization layer: poor performance due to throttling of nodes in particular locations, odd behaviors manifesting when two VMs use shared resources in unexpected ways, and so on. Good luck getting the hardware metrics needed to debug these out of the cloud providers.

Cost

This is fast becoming a non-issue as prices race toward zero. However, at the time of writing, those AWS bills could still add up to a significant chunk of change. Delivering the kind of reliability that the cloud titans provide isn’t easy, and the cost can often be worth it.

Custom Hardware

This is a fairly extreme case, but customization with the big services is rough unless you’ve got significant cash to throw at the problem.

Scale

If you're already Amazon's size or larger (think Facebook), you're probably better off building something that works for you. I don't imagine many readers are in this position.


There you have it: the best set of reasons I have for avoiding the cloud.

If none of those reasons apply to you, and high availability, high reliability, and availability everywhere are what you need, nothing beats the big cloud computing providers. They're ubiquitous, generally easy to use and automate, and getting cheaper every day. Sometimes you can even get better deals from specialized services like Bluehost or Squarespace, but beware the drawbacks.

However, if you have a concern along any of the lines above, it’s at least worthwhile to take a second look.

Ubuntu on Asus Zenbook Flip UX360C – Power Performance

I spent a few weeks watching a Zenbook Flip drain its battery doing different things via Ubuntu 16.10, and here I present the results.

Charging while idling (screen on):

| Trial Number | Initial Charge | Final Charge | Time (hours) | Total Charge Time (hours) |
|---|---|---|---|---|
| 1 | 0.120 | 1.000 | 2.409 | 2.737 |
| 2 | 0.091 | 0.990 | 2.000 | 2.225 |
| 3 | 0.041 | 0.990 | 2.534 | 2.671 |
| 4 | 0.013 | 1.000 | 2.568 | 2.602 |
| Average | | | | 2.565 |

Admittedly, this has less to do with the OS than the hardware, but it was easy info to collect while running the other tests.
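For clarity on the "Total ... Time" columns in these tables: the numbers are consistent with linearly extrapolating each measured interval to a full 0%-to-100% swing, i.e. elapsed time divided by the fraction of capacity covered. A sketch of that arithmetic (the function name is mine):

```python
# Extrapolate a measured partial charge/discharge run to a full 0%-100%
# cycle. This matches the "Total ... Time" columns in the tables:
# the measurement never covers the full battery range, so the elapsed
# time is divided by the fraction of capacity actually traversed.

def total_cycle_hours(initial: float, final: float, hours: float) -> float:
    """Linear extrapolation of a partial run to a full battery cycle."""
    return hours / abs(final - initial)

# Trial 1 of the charging table: 0.120 -> 1.000 in 2.409 h
print(round(total_cycle_hours(0.120, 1.000, 2.409), 3))  # ~2.737
```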

Discharging with video playing and full screen brightness:

| Trial Number | Initial Charge | Final Charge | Time (hours) | Total Discharge Time (hours) |
|---|---|---|---|---|
| 1 | 1.00 | 0.029 | 4.050 | 4.171 |
| 2 | 1.00 | 0.028 | 4.242 | 4.364 |
| Average | | | | 4.268 |

I was pulling video from YouTube via Firefox, specifically https://www.youtube.com/watch?v=SrlHtqTL_8o&list=PLU0V6ITiWqjhk5k-7hAasE5oAQspuTYgp. Gaming playlists make a good source of non-stop video.

Discharging passively with the screen at 40% brightness:

| Trial Number | Initial Charge | Final Charge | Time (hours) | Total Discharge Time (hours) |
|---|---|---|---|---|
| 1 | 0.975 | 0.029 | 10.381 | 10.974 |
| 2 | 1.000 | 0.029 | 10.630 | 10.947 |
| 3 | 1.000 | 0.029 | 7.068 | 7.279 |
| Average | | | | 9.733 |

I’m not sure what happened with the last trial. Nothing should have changed between them. Regardless, I think the picture is clear enough.

I had started to do passive discharge times at full brightness, but later decided against it. The data I gathered is below:

| Trial | Initial Charge | Final Charge | Time (hours) | Total Discharge Time (hours) |
|---|---|---|---|---|
| 1 | 1.000 | 0.030 | 7.462 | 7.693 |

This looks similar to the odd trial from the previous table, so that might offer an explanation.

The last test I ran was battery rundown while sleeping. This test should be completely independent of Ubuntu, but it’s still useful.

Rundown time: 35 days from 100% to 13%

I only had the patience for one trial, but I'm more than satisfied with the result. Presumably it could go 40+ days before losing state.

It looks like Ubuntu 16.10 will give you anywhere from 4 to 10 hours depending on usage, where 10 hours means barely any usage at all. I'm inclined to believe Windows performs better, but I haven't run the tests myself.

Benefits of Functional

In a couple of previous posts I’ve provided what I believe to be a good definition of a functional language (Functional Programming Definition) and why a functional model is ultimately required due to the constraints of reality (Everything is Functional).

However, I don’t know that I’ve been clear about exactly why functional programming is so beneficial. I think the benefits can be summed up in one word: predictability.

When I'm working with functional code, I know that I can lift any piece from anywhere and get identical behavior from the same inputs, wherever I may need it. When I know the language enforces these rules, I can be supremely confident in this quality of the code, even unfamiliar code.

When working in languages like C++, Java, and Python, I go to some lengths to maintain a functional quality in my code, and it can be done in most languages. However, when veering into unfamiliar code, my confidence in my ability to make modifications without odd, unexpected side effects drops dramatically, and that makes those changes much more of a slog. As bad a practice as it's generally recognized to be, I still bump into the occasional global mutable variable or static class member. It doesn't take many of those to wreck your day.
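To illustrate the hazard with a contrived example (the names here are mine, not from any real codebase): the impure function below answers differently depending on hidden state that any other code may have touched, while the pure version can be lifted anywhere with confidence.

```python
# A hypothetical illustration of why hidden mutable state hurts
# predictability.

# Impure: the result depends on a global that any other code can mutate.
discount_rate = 0.10  # global mutable state

def price_after_discount_impure(price: float) -> float:
    return price * (1 - discount_rate)

# Pure: everything the function depends on arrives through its parameters,
# so the same inputs always yield the same output, wherever it's called.
def price_after_discount_pure(price: float, rate: float) -> float:
    return price * (1 - rate)

print(price_after_discount_impure(100.0))      # 90.0 ... for now
discount_rate = 0.50                           # someone, somewhere, mutates it
print(price_after_discount_impure(100.0))      # 50.0: same call, new answer
print(price_after_discount_pure(100.0, 0.10))  # 90.0, always
```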

I think a second, closely related benefit of the functional style is composability. When you have supreme confidence in your components, putting them together can be a snap. This benefit depends a little more on good architecture and design practices, but outright avoidance of the features that make reasoning about code difficult will certainly have an impact.
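Composition in that style can be as simple as chaining small pure functions into a pipeline; a minimal sketch, with made-up helpers:

```python
# Composing small pure functions into a pipeline (all names hypothetical).
from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return lambda x: reduce(lambda acc, fn: fn(acc), reversed(fns), x)

strip = str.strip
lower = str.lower
exclaim = lambda s: s + "!"

# Because each step is pure, the pipeline's behavior is just the sum
# of its parts; no step can surprise the others through shared state.
normalize = compose(exclaim, lower, strip)
print(normalize("  Hello World  "))  # hello world!
```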

In reference to my earlier post, everything is functional if you're willing to zoom out far enough: you could start up another process, container, VM, or cluster to restore your faith in the predictability of your code. Note, though, that this approach has costs that could be fairly easily avoided by going functional from the start.

Disclaimer: it’s always possible to write bad code in any language or style. However, it can be argued that it’s much more difficult to reach the spaghetti-horizon where the functional style is enforced.