All posts by Josh

Minds: @JoshFuhs Twitter: @JoshFuhs

General Sorting is Θ(k × n)

This article makes extensive use of Big-O notation.

n is the number of elements to be sorted
k is the key size in bits; the smallest k for n uniquely identified elements is log(n)

I was taught in school (back in the day) that general sorting, usually assumed to be comparison sorting, is O(n × log(n)).

An alternative and well-known form of sorting, radix sort, which places elements relative to the value of the key rather than swapping after comparisons, is O(k × n). Since the smallest-case k for a set of uniquely identified elements is log(n) bits, O(n × log(n)) for radix sorting is still accurate for a smallest-case key size.
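For concreteness, here’s a minimal sketch of a least-significant-digit radix sort over fixed-width, non-negative integer keys (the function name and the byte-at-a-time digit choice are mine): with k-bit keys it makes k/8 stable passes over all n elements, which is where O(k × n) comes from.

# Minimal LSD radix sort sketch for fixed-width, non-negative integer keys.
# With k-bit keys and byte-sized digits, it makes k/8 stable passes
# over the n elements, so the total work is O(k * n).
def radix_sort(values, key_bytes=4):
    for i in range(key_bytes):  # least-significant byte first
        buckets = [[] for _ in range(256)]
        for v in values:
            buckets[(v >> (8 * i)) & 0xFF].append(v)
        values = [v for bucket in buckets for v in bucket]
    return values

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# -> [2, 24, 45, 66, 75, 90, 170, 802]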

However, analyses of comparison sorting often ignore the key size and assume comparison time is constant. Under that assumption, the fastest comparison sorts perform O(n × log(n)) comparisons. If the same strategy were applied to the analysis of radix sort, its time complexity would be O(n), since k would be a constant factor.

Given that most general-purpose comparison sort algorithms accept arbitrary objects with arbitrary comparators, comparison time is certainly not constant: a single comparison can examine up to k bits of key. If arbitrary key size is considered, sort time for comparison sorts would be O(k × n × log(n)) or, for the smallest-case k, O(n × log²(n)).

Radix sort doesn’t readily handle arbitrary data types; it’s typically used for sorting fixed-size integers. The usual argument is that since radix sort isn’t general, its performance doesn’t translate to general sort performance.

What is general sorting? Any data structure that can be represented in computer memory can also be represented as a string, that is, a loosely structured, arbitrarily long sequence of bytes. So string sorting is general sorting.

There is a string sorting algorithm, Burstsort, that efficiently sorts strings and is O(k × n), where k is the average key size. As mentioned above, strings can represent arbitrary data structures, so Burstsort is general for sorting. A general implementation of Burstsort would take a set of objects and a serializing function as input rather than a comparator. The serialization of the objects to be sorted may not be particularly fast, but it is O(k × n).
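As a sketch of that interface (Python’s built-in sort stands in for the burst-trie machinery of a real Burstsort here, and the example assumes the serialization is order-preserving, i.e. byte order matches the desired object order):

# Interface sketch: a set of objects plus a serializing function, no
# comparator. A real Burstsort would feed the byte strings into a burst
# trie; Python's built-in sort is only a stand-in. Serializing is O(k * n).
def general_sort(objects, serialize):
    keyed = [(serialize(obj), obj) for obj in objects]
    keyed.sort(key=lambda pair: pair[0])  # stand-in for the string sort
    return [obj for _, obj in keyed]

# Example: sort (name, age) records by a byte serialization of the name.
people = [("carol", 31), ("alice", 29), ("bob", 45)]
print(general_sort(people, lambda p: p[0].encode("utf-8")))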

Therefore, general sorting is O(k × n). Since that’s also the minimum time required to at least look at the keys to be sorted, general sorting is also Θ(k × n).

I’m not sure if this is old news to everyone, but it only just hit me.

Getting Started With OpenStack

I’ve spent the last few weeks playing with OpenStack. Here are some of my notes. I should add that this is almost certainly not the best quick-start guide to OpenStack online. If you know of a better one, please leave it in the comments.

OpenStack is an enormous project that has been around for a few years, composed of numerous core pieces and even more ancillary ones. Because the project is so big and has so many active players, volumes have been written about it, and narrowing search results to the specific topic you’re interested in takes some effort. A lot of old, no-longer-relevant documentation also turns up in searches for fixes to problems; sometimes a lack of information would be more useful. It’s daunting to get started, to say the least.

The use-cases I care about:

  • Single-user on a small cluster.
  • Small business (multi-user) on a small cluster.

I write a lot of software, most of it libraries and user tools. I want an expandable test environment where I can spin up arbitrary VMs in arbitrary operating systems to verify that those different tools work. As far as I can tell, OpenStack is the only free option for this. I also run some services for which Docker would probably suffice. However, Docker isn’t OS universal. If I have an OpenStack cluster available, I know I have my bases covered.

All code for this investigation has been posted to GitHub: https://github.com/joshfuhs/foostack-1

DevStack

DevStack is an all-in-one OpenStack installer, which makes life a lot more pleasant.

As far as I know, all of the below works with Ubuntu 16.04 and 18.04.

Use instructions here to set up a dedicated OpenStack node:

https://docs.openstack.org/devstack/latest/

This page works for getting to a running instance:

Note: it’s important to do the above on the demo project with the private network. The public network on the admin project doesn’t have a DHCP service running, so CirrOS won’t connect to the network.

To get SSH working, assign a floating IP to the new instance via:

Network -> Floating IPs, hit “Associate IP to Project”

This is a host administrative task. It allows real network resources to be allocated to an otherwise virtual infrastructure. The project administrator can then associate the IP with a particular instance.
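For reference, here’s roughly the same flow scripted against the openstacksdk library (a sketch only; the cloud profile and the “my-instance”/“public” names are placeholders):

# Rough openstacksdk equivalent of the dashboard steps above. The cloud
# profile and resource names are placeholders; error handling is omitted.
import openstack

conn = openstack.connect(cloud="devstack")  # credentials from clouds.yaml

server = conn.compute.find_server("my-instance")
public = conn.network.find_network("public")
ip = conn.network.create_ip(floating_network_id=public.id)  # allocate IP
conn.compute.add_floating_ip_to_server(server, ip.floating_ip_address)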

Console

Under Compute -> Instances, each entry has a corresponding console on the right.

You can also look up the console URL (note: you must establish admin credentials first):

source devstack/openrc admin admin
openstack console url show <Instance-Name>

DevStack is not intended to be a production installation but rather a means for quickly setting up OpenStack in a dev or test environment:
https://wiki.openstack.org/wiki/DevStack

As such, DevStack doesn’t concern itself too much with restarting after reboot, which I’ve found to be broken on both Ubuntu 16.04 and 18.04.

The following script seems to bring the cinder service back into working order, but since this is not a use-case verified by the DevStack project, I worry that other aspects of node recovery are broken in more subtle ways.

Bash-formatted script:

# Reference: https://bugs.launchpad.net/devstack/+bug/1595836
# Re-attach the volume backing files to the first free loop devices.
sudo losetup -f /opt/stack/data/stack-volumes-lvmdriver-1-backing-file
sudo losetup -f /opt/stack/data/stack-volumes-default-backing-file
# Restarting the cinder volume service also seems to be a necessary step.
sudo service devstack@c-vol restart

Also see: https://github.com/joshfuhs/foostack-1/blob/master/code/systemd/foostack%40prep-c-vol.service

Note: there’s a lot of old commentary online about using rejoin-stack.sh to recover after reboot, which no longer exists in DevStack — one example of the difficulty of finding good, current info on this project.

Server run-state recovery doesn’t work well out of the box. All running servers come back SHUTOFF after a single-node restart, and getting them running again has proven to be tricky. There are some mentions of nova config switches that might help, but I’ve not yet seen them work. To make things more complicated, immediately after reboot nova thinks those instances are still running. After some time the nova database state catches up with what’s actually happening at the hypervisor level.

In an attempt to get this working, I’ve created a pair of scripts and a corresponding service to record and shutdown running instances on system shutdown and restart them on system startup (see: https://github.com/joshfuhs/foostack-1/tree/master/code/systemd). However, this doesn’t always bring the instances completely back up, and when it doesn’t those instances seem to be unrecoverable until another reboot. Regardless, this feels like a complete hack that should not be needed given what the OpenStack system does.
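The gist of those scripts, sketched here with python-novaclient (credentials, URL, and state-file path are placeholders; the real versions live in the repo linked above):

# Record-and-restore sketch. All credentials, URLs, and paths below are
# placeholders; error handling is omitted.
import json
from keystoneauth1.identity import v3
from keystoneauth1.session import Session
from novaclient import client

auth = v3.Password(auth_url="http://controller/identity/v3",
                   username="admin", password="secret", project_name="admin",
                   user_domain_id="default", project_domain_id="default")
nova = client.Client("2.1", session=Session(auth=auth))

STATE = "/var/lib/foostack/active-servers.json"

def record_and_stop():  # run on system shutdown
    active = [s.id for s in nova.servers.list() if s.status == "ACTIVE"]
    with open(STATE, "w") as f:
        json.dump(active, f)
    for server_id in active:
        nova.servers.stop(server_id)

def restore():  # run on system startup
    with open(STATE) as f:
        for server_id in json.load(f):
            nova.servers.start(server_id)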

I’m giving up on server run-state recovery for now. The success rate is better than the fail rate, but the failure rate is too high for any real use. In a multi-node environment, I suspect the need to completely shut down VMs will be rare since they could be migrated to another live node. I’ll revisit this when I get to multi-node.

Yet another problem with rebooting a DevStack node is that the network bridge (br-ex) for the default public OpenStack network doesn’t get restored, cutting off access to instances that should be externally accessible.

This is resolved pretty easily with an additional network configuration blurb in /etc/network or /etc/netplan (Ubuntu 17.10+). See: https://github.com/joshfuhs/foostack-1/tree/master/code/conf

Other OpenStack Installs

Alternative instructions for installing on Ubuntu can be found here: https://docs.openstack.org/newton/install-guide-ubuntu/

These are lengthy, and I’ve yet to find anything in between DevStack and this. This might be a weakness in OpenStack: it’s built by large IT departments for large IT departments. DevStack is great in that it lets me play with things very quickly. If there’s an opinionated installer that is also production-worthy, I would like to know about it.

https://wiki.openstack.org/wiki/Packstack – Uses Redhat/CentOS and Puppet to install OpenStack components. Unfortunately I’m kinda stubborn about using Ubuntu, and the introduction of Puppet doesn’t really sound like a simplifying factor. I could be wrong.

Python API

<old_thoughts>

The best way to describe the Python API is “fragmentary”. Each service (keystone/auth, nova/compute, cinder/storage, neutron/network, glance/images) has its own Python library and a slightly different way to authenticate. On Ubuntu 18.04, the standard python-keystoneauth1, python-novaclient, python-cinderclient, and python-neutronclient packages do things just differently enough to make interacting with each a whole new experience. The failure messages tend not to be very helpful in diagnosing what’s going wrong.

The online help is very specific to the use case where you can directly reach the subnet of your OpenStack system. In that case, the API endpoints published by default by a DevStack install work fine. In testing, however, I tend to sequester the system behind NAT (predictable address and ports) or worse, SSH tunneling (less predictable ports), making the published API endpoints useless (and in the latter case, not easily reconfigurable).

Each API-based authentication method has a different way of handling this:

  • cinder – bypass_url
  • nova – endpoint_override
  • neutron – endpoint_url

And in some (all?) cases, explicitly specifying the endpoint doesn’t work with standard keystoneauth Sessions.

</old_thoughts>

At least, that’s what I thought until I dug through the Session API. At the time of writing, DevStack doesn’t supply Keystone API v2 (it’s deprecated), and python-neutronclient can’t work directly with Keystone API v3, which prompted quite a bit of digging to make the Session API work. Once I discovered that the ‘endpoint_override’ parameter of the Session API is the way to explicitly adjust the URLs used for communication, consistent authentication became much easier. I didn’t find any tutorial or help that explained this. It just took some digging through code.
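In code, the pattern looks roughly like this (a minimal sketch assuming an SSH-tunneled DevStack host; every URL and credential is a placeholder):

# Authenticate once with a keystoneauth Session, then point each client at
# a reachable URL via endpoint_override. All values are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1.session import Session
from novaclient import client as nova_client

auth = v3.Password(auth_url="http://localhost:5000/v3",
                   username="demo", password="secret", project_name="demo",
                   user_domain_id="default", project_domain_id="default")
sess = Session(auth=auth)

# Without endpoint_override, the client would use the catalog-published
# endpoint, which isn't reachable from behind the tunnel.
nova = nova_client.Client("2.1", session=sess,
                          endpoint_override="http://localhost:8774/v2.1")
print([s.name for s in nova.servers.list()])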

There are still some API-style differences between the libraries, most notably neutronclient, but the Session API cleans up most of the initial complications.

I’ll post API-exercising code in the near future.

Current Status

I knew going into this that OpenStack was a complicated project with a lot of moving parts. I had expected this effort to take a couple months of free time to figure out how to (1) set up a single-node system, (2) develop some automation scripts, and (3) set up a multi-node system. Three months in and I’ve got (1) partially working and a rudimentary version of (2).

After all of this headache, I can definitely see the appeal of AWS, Azure, and Rackspace.

I have a little more stack scripting work that I want to do. Once I’ve seen that all of that can be done satisfactorily, I’ll look into setting up a multi-node cluster. Stay tuned.

How I Got Here

When I talk with people about what I do, it’s clearly difficult to explain quickly in a way that lets them understand any of it. I’ve written a bit about that here: Agile vs. Formal Management Systems. What’s even less obvious in a casual conversation is how one could even get into doing what I do, and whether it would be worthwhile. This is the kind of thing that will have to be updated periodically because I’m definitely not standing still. I’ll keep this strictly a description of education and professional development. That’s certainly not the end-all; it’s also important to be able to manage what you build and earn, which I talk about here: Phases of Investing. Motivation is also important, but that’s an enormous topic.

In my early years I was a console and computer gamer. I still would be if it didn’t take so much time and produce so little. Gaming was much more a solo activity back then as Internet gaming wasn’t as feasible given available technology. We did figure out how to play Doom multi-player with a modem connection and later we got a couple of computers linked up to play Starcraft and other titles.

I wanted to make computer games. My friends and I would bounce around various ideas for games and talk about making them. We didn’t really have any idea where to start, but I knew programming was involved. I started playing with MS-DOS batch file scripting and made a few small useful things.

In high school I was lucky enough to have programming classes available to me. I jumped on them. There we learned the (even then) relatively obsolete QBASIC and got an introduction to C++. I think high school programming classes are commonplace now, and I would encourage anyone to take advantage of them.

At age 16, I started to work at a local retailer. This isn’t the first job that I had, but it is the first one that I took seriously. I would have been happy as a cashier or pushing carts, but they wanted me at the service desk. I didn’t like it, but I did it. In retrospect, it really helped me figure out how to interact with people better, especially people who were starting off upset. I probably owe a lot more to that experience than I typically give it credit for.

I went to university and studied Computer Science and Mathematics. I wasn’t sure what I was getting into when I started, but it turned out that I liked the Computer Science side a lot.

Throughout my school years I continued poking at programming projects — mostly with the goal of making games. I picked up PHP and MySQL to make a small, ultimately unused, web site. At the time, there were several massively multiplayer HTML games that I thought I could take a swing at.

This small web site was key in helping me land my first programming job during my sophomore year. It demonstrated some ability that a resume alone would never communicate. I ended up helping a small company build and manage their web applications. Looking back, it was pretty poor work, but it did help them. That job beat into me the value of testing, revision control, and clean code in ways that the classroom had not yet been able to emphasize. There were very lax controls around the deployment of code, and I hadn’t developed nearly the required level of discipline for checking my work. It bit us badly. The complete embarrassment of bringing the productivity of an entire office to zero is something that most kids don’t get anymore. It changed me. Before long I had a script for verification that was better than anything they ever had. Other hiccups encouraged me to bring revision control (CVS) to the office.

Realizing the valuable skills I’d learned in just a few short months on a programming job, my view of classwork dimmed. I got through my classes with little enthusiasm, but they were definitely good for me.

I took a generalist approach in school. I tried a little of everything. Before I left undergrad I had seen the basics of operating systems, compilers, security, databases, networking, graphics, software engineering, algorithm analysis, and digital logic. The advice seemed to be to specialize, but that didn’t appeal much to me. Why specialize if you don’t know why you’re specializing? I think I’ve since used aspects from most of those classes. It was worth it.

In between classes I figured out how to use Linux. I started with Slackware 8 (not recommended), then tried Mandrake, and eventually settled on Debian. With that I set up a small home network made up of old computers that I had collected over the years. That taught me a lot about networking and some sysadmin basics. It’s easier to get started with Linux today; the graphical user interface (GUI) on those older versions wasn’t as developed as it is now, so I stuck with Windows for my day-to-day back then. Today I’m almost exclusively using Ubuntu.

My first job out of school was programming small embedded Intel 8088 and ARM7 processors. They were plugged into custom boards designed by the company. My first project was to work with the hardware engineers to design a new board using a relatively “new” interface called USB. Embedded development was new to me, but the new processor and board had so much more CPU capability and memory available that they rarely gave me a problem. Virtually every significant piece of code I wrote for the board had an accompanying program to test it. I was almost doing unit tests without knowing it. We also created a standard hardware testing harness and accompanying software to verify that all the parts of the board were communicating as expected.

In the 3 years I worked at this job, I grew from a competitive, passive-aggressive, pain-in-the-ass know-it-all to a much more mellow, more humble pain-in-the-ass. Partially that was due to doing meaningful work over a longer period of time. That’s a difficult experience to find in school. I also credit the fact that I was working with a wide diversity of experience. Most of my co-workers were at least a decade older than me, and several had started working at that company before I was born. They had a long view of their work that had rarely occurred to me, but I slowly adopted it. That’s also a difficult experience to find in school. There was also a wide variety of professional backgrounds: mechanical engineers, electrical engineers, embedded engineers, UI engineers, chemists, biologists, and veterinary experts. The spread of domain expertise was so varied that I regularly had to rely on other people to get my job done. Some level of humility was a requirement. That’s also a difficult experience to find in school.

In 2007, another opportunity came along, and I jumped at it. This time I would be working on some compiler-like tools. Specifically, we were taking programs apart and reassembling them with some modifications. Testing was crucial to this company, and they had the best product validation practices I had seen to-date.

During this time, I found a renewed appreciation for classwork. I could see that my classroom training in the more abstract topics like algorithm analysis was an asset that some others were lacking. It also hit me that the abstract concepts taught in school will never announce themselves as necessary. They just appear as a lot of work that the knowledgeable know how to do faster and better. It reminds me of the people who claim they’ve never needed some kind of math. Strictly speaking, you don’t need math, but you’re probably doing a lot of extra work without it. Throughout these years I took advantage of computer science and math classes at the nearby university. I have not regretted it.

The next group I worked with made a distributed system that coordinated data across hundreds to tens of thousands of sites. Their development practices needed some help, and I was more than happy to provide it. Decisions made throughout the product’s history had turned the company into a very support-heavy organization. The support team was well-staffed and very busy. Tier-three support was a phone that the engineers traded amongst themselves for some extra pay, and it was a nearly guaranteed angry, late-night phone call.

We wanted to get ahead of the support calls. I and a couple of other engineers led an effort of infrastructure and testing improvements. Product deliveries went from hand-assembled to automatically generated. The database went from a mish-mash of direct table edits to a set of scripts that were guaranteed idempotent through arbitrary upgrade paths. Unit testing was added at every level. Interface testing was introduced every time an interface was touched. The UI also got a lot of testing attention.

The customers went from regular failure on installation (not to mention downstream problems) to zero installation failures in the space of about a year. It was remarkable. At one point I noticed someone with the tier-three support phone, which I had been avoiding. I asked him how it was. His answer: “It never rings, so it’s free money.” That was a moment of silent celebration. We had spent a lot of time making things better for the customers, but I hadn’t realized that we had almost completely transformed the despised support phone into a paperweight.

Once the raging fires were squelched down to contained bursts, I put some effort into learning how to do large-scale communication correctly and started collaborating with local grad students on what the right design would look like.

Unfortunately, circumstances worked out such that the parent company saw our office as unnecessary and closed it — perhaps we engineered around the urgent workloads a little too well.

I ended up working on more build tools afterward. I would alternate between being a feature developer and team lead. Then the company signed a deal requiring compliance with a formal quality management system, and my name came up as someone to assist with the deployment. At the time I thought it would be a good way to drive the company and products toward better practices through measurement and analysis, so I jumped at it. The job ended up being a lot of paperwork and document management, but the spark of a data-driven effort remained, and I drove on. One of the greatest benefits of this position was that it gave me the clearest picture of company organization and management that I’ve ever had, and I find it’s very relatable to the concepts learned in computer science.

In retrospect, my career path seems meandering. I went after the work and positions that seemed right at the time, but there was no single theme to the motions except “Make Things Better.” Perhaps a more deliberate drive would have turned out better in some ways, but I really have no complaints.

Side Projects

In between jobs, I’ve never not been working on a side project, though most of them were never published. Sometimes I would find an open source project that was making my life easier in some way and start making the changes that I wanted. Other times I just worked on things I was interested in at the time. Sometimes it supported my job, sometimes not. Professional work is often about negotiating the trade-off between doing things well and doing them fast, with a lot of business pressure on fast. On the side I would often explore ways to do things well, and it usually worked out that the lessons learned could be applied to the job.

Working from Home

If you’re able to get work done without anyone looking over your shoulder, you’ll have no problem finding a company willing to let you work wherever you want. I’ve heard that remote work actually makes people more effective in some cases, but I’m not so sure. The corporate world seems to go back and forth on this. As nice an idea as it sounds, there are some complications.

The biggest one: isolation. I didn’t realize how much I got out of the daily office chatter until it was absent. There is no happy hour with the co-workers, and office events are much harder if not impossible to fit into the schedule. You’ll frequently find that you’re not aware of things that are happening in the company. Some regularly scheduled chat times might go a long way.

To make things worse, if you work and are a family man, you may find that 80 hours of your week are spoken for, so there’s little time to develop any kind of professional relationships in your area. I would recommend you find some relevant groups to interact with and make a point of doing it. I’ve found that there are valuable business relationships to be found among the regulars at coffee shops and breakfast joints.

Another problem: infrastructure. Residential power and communications aren’t as reliable as commercial, I’ve found. I’m usually dealing with two multi-hour power outages per year. It’s manageable, but annoying. Communications outages are similar. The answer is redundancy, but it’s an extra headache that you typically wouldn’t have to deal with in an office.

Things I Might Do Differently If Starting Over

I realize that things have changed a bit since I went through school, and I think there might now be better ways to do it. For someone starting today, I would recommend Python as a starting language. It’s relatively simple and there’s a lot of online help for getting started.

I might also suggest a hard look at the online learning resources. There is an advantage to having a group of peers trying to learn the same thing you are: it brings out the useful questions. However, college tuition is getting expensive, the colleges are usually not where you are, and communications technologies are constantly improving. If you can find a small group of locals interested in the same online course and maybe someone more knowledgeable to guide the group, you’re set.

If I were starting again today, I would put more focus into freelance work sooner. Freelance gives you the freedom to look for interesting work in a way much less restrained than an employer would typically allow. Full-time employment provides income security (but be cautious). Full-time employment is great when you don’t know what you’re doing, but after a while it can feel constraining. If you can get someone to pay you to do exactly what you want to do, there’s nothing better.

I would publish more hobby work. You may not think it’s valuable or clean enough to present to the world, but others may disagree. If no one can ever see it, you’ll never know what something is worth to the world. I’m getting better at this as I write.

Big vs. Small

In my decade-plus of experience in the working world, I’ve managed to work for large and small companies, and I’ve noticed how differently these organizations operate.

Some groups are small, lean, and mean. They don’t have much to work with, and everything they do is extremely focused on their product(s). These companies are intense and, I think, fun to work for, but they can’t survive many hits. As a result, if they manage to hang on, they probably have a very capable team that can work wonders.

Then there are the larger organizations that don’t seem quite so focused on what they deliver. Once you’ve spent a few hours in meetings trying to optimize your software build processes to take full advantage of the various international tax benefits, you may realize that a lot of work performed by people all over is done to get around obstacles that we set up for each other. You also quickly realize that there is nothing more tedious, and less worthwhile on the whole, especially to technically-minded people.

I have never seen that at a small company. Small companies don’t have the resources to worry about that kind of thing or the scale to make it pay off. They focus on the product or service that they’re delivering and not much else.

However, all of the larger organizations that I’ve worked for have been much more profitable, and I wonder why that is. There’s a simple explanation — the bigger companies do more valuable work. Extremely valuable work would allow a company to grow faster than others and support the larger overhead of organizing so many more people, processes, etc. Given what I’ve seen at large companies, I’m not so sure this is the only, or best, answer.

The value of work or a product is dictated by reality. How much time does the product save? What are the side effects of using it? What are the risks? The net consequences of all concerns indicate the intrinsic value, regardless of whether that’s reflected in market value.

We set up rules to guide us toward maximum value. Distinguishing good rules from bad rules isn’t always easy. Bad rules can masquerade as good rules for a time, and good rules can go unnoticed. As a result, discovering the rules that actually guide us toward maximum value is a meandering process.

The US Constitution might be an example of effective rules; it has produced a fairly stable country that has managed to be very productive. English Common Law might be another: a pattern of behavior that has positively influenced every place it has touched.

For reasons stated above, the rules regularly fall short of optimal. When they do, they force the people and organizations that are required to follow them to trade off real value creation for compliance with the rules to minimize penalties. As the gulf between the enforced rules and the optimal rules widens, those people and organizations are forced into a particularly difficult spot.

On one side, reality can’t be ignored indefinitely. People need to breathe, drink, eat, sleep, etc. If those needs can’t be met, people will get sick and die, at an arbitrarily large scale. Accumulated wealth can act as a buffer against bad decision making, but it won’t last forever.

On the other side, the man-made rules can be ignored to the extent that the enforcing organizations can be avoided or manipulated. Many times man-made rules are also created in such a way that they don’t apply to the smallest of organizations.

When companies are caught between these two sides, acting against reality, failing to comply with bad rules, or both will start killing them. The ones that seem to survive are either so small that the rules don’t apply or detection by enforcement bodies can be avoided; so large that they can take advantage of relatively expensive techniques to comply with or avoid the rules; or so large that they can directly influence the rules or their enforcement.

Given this, in an environment where the rules are good, one might expect to see companies of all sizes flourishing or at least surviving to some extent. In an environment where the rules aren’t as good, I would expect to see some fraction of middle-sized companies disappear. The smallest companies can escape the rules or detection, the largest companies can financially or politically maneuver, and the middle guys take the hit.

How do the large companies have an advantage? New fixed costs are a relatively smaller proportion of their revenues and (presumably) profits. This includes complying with regulations, legal battles, lobbying, and moving offices or plants. New taxes hit large companies too, but the accounting research needed to minimize tax burden through international accounting is itself a fixed cost, which yields the same advantages.

These all might be described as political economies of scale. The advantage isn’t in discovery or creation, but in being able to maneuver the political landscape. I believe this is almost exclusively the domain of the large organizations.

Phases of Investing

Here are the phases of investing that I’ve developed over the years.

Many authors have covered personal finance. Many more will follow. My approach may be more holistic than some. My goal is to find the best way to spend my time, money, and other resources. Note that I’m not licensed anywhere to give this kind of advice, so take this for what it is: rambling thoughts.

As a baseline, let’s consider: if you make $10k/year and put 10% into the general stock market at historical rates of return, you would have roughly $120k after 30 years. That’s enough to reliably draw $400/mo for decades. Double that if you made $20k/year. Double it again if you did that for 40 years. That’s the power of investing and interest over time. (Quick aside: Social Security takes more and doesn’t yield as much.)
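A quick check of that ballpark (my arithmetic, assuming a steady 8% annual return, a rate that sits between the ~7% real and ~10.5% nominal figures discussed later):

# Future value of investing $1,000/year (10% of $10k) for 30 years at an
# assumed 8% annual return. The 8% rate is an assumption for the ballpark.
def future_value(per_year, rate, years):
    total = 0.0
    for _ in range(years):
        total = (total + per_year) * (1 + rate)
    return total

print(round(future_value(1000, 0.08, 30)))  # ~122,000; 4% of that is ~$400/mo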

Returns on investment can take many forms:

  • Learning
  • Contacts and relationships
  • Money

Ideally you should arrange your life so that you’re always developing all three. Primarily you do this through work. However, there will be times that you will have to venture out on your own.

General Guide to Income Allocation

  • At least 10% to debt payments and investments (more if 10% isn’t enough to reduce your debt).
  • At least 10% to tithe. You may believe this to be an irrelevant religious tradition, but it’s important that you actively seek out projects that you think are worth developing or investing in — even with little or no expectation of return. The returns could be measured in knowledge, connections, good will, karma. Paying taxes isn’t the same. It is important to do this intentionally. It is important to do this willingly. It prepares you for the later phases.
  • At most 10% on entertainment. Some costs are required: housing, food, clothing, etc. Some costs save you time and are ultimately worthwhile. Some costs are extravagances. Identify and limit the extravagances. However, it’s important that this not be zero. A life without any entertainment is hard to live. Some people can do it. I can’t.

That leaves roughly 70% for the daily necessities. If your expenses are less than that, you can increase the other buckets, but keep entertainment at or below investments.

Investing Phases

1. Falling into the abyss – live within your means

If your debt is growing you need to get a handle on this fast, especially if you’re accumulating high-interest credit card or other unbacked debt.

If you have cable, drop it. If you’re eating out a lot, it’s time for ramen and PB&J. If you have toys you don’t need, sell them. In this financial state, you can’t afford ego or luxury. Get a better job. Get a second job. Reaching cash-flow neutral is absolutely paramount.

In this phase I would limit tithing and charity. You may be the biggest charity case that you know of. One wrong move, and you could be on the street. A great way to help others is to make sure they won’t need to help you.

You’ll be in this phase until your debt is no longer growing. In other words, your income is bigger than your expenses — preferably 10% bigger, but we’ll take what we can get.

2. Living on the edge – build the emergency fund

Your finances have stabilized, but there’s not much room for error. Job loss, injury, or illness could put you in a very rough spot. It’s time to build a buffer.

Open a bank account or some other stable fund and start saving.

You’ll be in this phase until you have enough in your low-risk account to cover 3 months of expenses. Make sure that you keep up with this as your expenses change in the future.

3. Drowning in debt – purge the leeches

Now that you have some breathing room, we can focus more on long-term finances.

In this phase, you have a large amount of unbacked debt. Credit card debt is some of the worst, but any debt above 7% interest is really bad.

Pay down the highest interest rates first and work your way down.

You will be in this phase at least until you have no interest rates greater than 7%. There is little point in investing anywhere else until this is accomplished: your debts would eat your gains faster than anything short of blind luck could earn them.

4. Work – develop your skills

Now that you’re out of serious debt, your investments will have some punch. Now we can start building long-term investments. Open a brokerage account (Schwab has the lowest fees today), IRA, 401k, or some combination thereof. Invest broadly in the stock market via ETFs or other funds, and then promptly forget about them. Your time spent working is worth far more than any effort to manage your meager investments.

Now is the perfect time to learn things. Learn how to do the job you’re doing. Learn how to do the job you want. Learn about finance, learn about business, learn about science. All of these things are worthy of your time.

If you’re in this phase and you’re young enough, you’re in really good shape. Time favors those who invest. If you’re not young, you may just expect to work longer. It obviously gets more difficult as you get older, but it’s usually something you can do. At the least you can be proud that you’ve significantly reduced the burden you would be placing on others if your life takes a wrong turn.

If you want to progress further, you’ll be in this phase until your total portfolio value is roughly 2x your annual expenses.

5. Long-bets – learn to manage wealth, carefully

You’ve been working and investing for a while and now you’ve built up a little wealth. Now we should learn to manage it. The experts may disagree at this point. The buy-and-hold strategy is definitely a strong one, but the world doesn’t run on people who sit around waiting for good things to happen. Just as in the last phase we developed our work skills and discipline and did something, now we will do the same with our wealth.

Pick one to five specific things to invest in and put no more than 7% of your total portfolio value into those. Then choose a target return of 2x-10x and hold on. Expect that you’ll be with these for at least 5-10 years. If they go under, try to understand what went wrong. If they hit your return target, sell them down to no more than 7% of total portfolio value and keep going… or choose something else.

I recommend choosing projects that not only look like they could eventually yield a return but that you also agree with in philosophy and principle. That way a complete loss could still be a justifiable attempt to make something worthwhile. Preferably, you would have domain knowledge around your investment, so that you have some way to expertly evaluate what you’re getting involved in. This is a great time to start a business or support a friend in his.

You will be in this phase until your total portfolio value is around 25x your annual expenses. With any luck, good choices and due diligence will hasten your trip to the next phase.

6. Director – move mountains

Congratulations, your expenses are covered by 4% of your portfolio. You are financially free. This is effectively a retirement point. Many retirement advisors would recommend having a larger portfolio, but you’re in good shape regardless. Keep your expenses in check, manage your projects well, and you can take on enormous challenges.

At some point, subtracting the long-bets portion of your portfolio still leaves you with 25x your expenses. Everything beyond 25x can be used for whatever you want. Open a restaurant, a comedy club, or a SpaceX.

Notes

The lines between these phases are fuzzy, but that’s roughly the outline. The limits to the phases are suggestions that can be moved one way or another depending on your willingness to take the risk.

Note that these phases are entirely dependent on your expenses. The key to moving through the phases is to control those expenses. There will be points in life where your expenses will increase for good reasons; children are a good example. Roll with it. There may be other times that your income will drop. In those times, you will be incredibly grateful that your day-to-day depends on at most 70% of your income. Do your best to bring it back down.

Why the numbers?

7% – This shows up a lot, and that’s because it’s roughly the long-term whole-market return rate once you’ve accounted for inflation. Historical inflation is ~3.3% and historical whole-market returns are ~10.5%. This means that the numbers in your account will appear to go up by roughly 10.5% every year (the swings can be large, so keep your hat on), while the buying power of each dollar will fall by about 3.3%. So if you’re withdrawing and spending more than 7% of your portfolio, even if the numbers appear to be getting larger, your ability to do things with your portfolio is shrinking.

This means that debt charging over 7% interest is worse than neutral. You’re always going to be better off paying down the debt than investing those dollars.

2x-10x (long bets) – The point of these long bets is to let them amplify your portfolio growth. One of the worst things you can do is drop out of an investment early because it made you a few dollars. It makes the management of those investments expensive, and you’ll probably miss out on the biggest gains. You already have your base portfolio.

Keep in mind that at 10.5% per year, your cash amount will double roughly every 7 years. If your big bet takes longer to double, it’s under-performing.

25x expenses (long bets) – This is essentially a retirement threshold: your expenses would be 4% of your portfolio. A general market portfolio will return ~7% after inflation, and roughly 20% of that will go to taxes on withdrawal, which gives you about 5.5% to live off of. If 4% covers your expenses, then you need to draw about 5% of your portfolio (4% ÷ 0.8) to cover it. Obviously the more you have, the better, but you’re in good shape either way.

Other Thoughts

Get married. Find a good partner and settle down. Yes, your night life will slow down — that’s the point. How to do this well is a topic for another day (and probably another author). Just like you can do more when you don’t have to hunt for food all day, you’re more effective when you aren’t frequently looking for companionship.

Casinos are a no-go. Everything about them is designed to draw you in and take your money. Markets, by contrast, exist to turn wealth, ideas, and work into more wealth.

Control your expenses. Some spending saves you time; those expenses are beneficial. Many others are just expenses. Drawing the line isn’t always easy, but think about it. Establish a rule for separating them and stick to it.

Never stop learning. Learn via your workplace, take classes on the side, or do a side-project that teaches you something. Outsource the things you know how to do. Do the things you don’t until you’re comfortable with them. Of course, you can’t do everything, so choose judiciously.

Start a business. It’s cheap and easy to fill out the paperwork. You don’t necessarily need an accountant or lawyer, but they can help you through the basics. Try to get revenue for your hobbies or try out ideas. When you have a real business idea, you’ll be more familiar with the nuts and bolts.

Government-supported accounts, like IRAs and 401k, have some tax benefits, but they make your money unavailable for the bigger projects you might want to tackle: entrepreneurship and large investments. Sometimes they’re worth it, like when employers offer matching cash. Keep in mind that government programs are subject to change without much warning and in ways that might not be legal if done by a business. For example, the Social Security program (not an investment account) publishes future payout numbers that aren’t going to be possible given impending solvency problems. It’s Bernie Madoff-level reporting that would mean a class-action lawsuit for a corporation.

While average returns from general stock investments are around 10%, their value can fluctuate wildly from year to year. If you can’t be invested in the general stock market for a period of 10-20 years, you may want to plan for a smaller return and hope for the best.

Extreme risk management. In truly exceptional circumstances, you may want to hedge against your currency crashing in value. It doesn’t happen often, but when it does, you may be in a world of hurt. Cash alternatives like other currencies, gold, or bitcoin could be useful in such a situation. In addition, if they end up performing well as assets, you can sell them down and put the proceeds in your portfolio (enormous gains for cryptocurrencies lately). There are a lot more considerations for holding and, more importantly, using backup cash-equivalents in emergencies, which I won’t go over; see a survivalist channel for more details. Because this is an extreme tail risk, I would keep these alternatives at less than 1% of portfolio value. They shouldn’t be expected to perform well.

It is cheaper to live now than at any point in the past. Doubt that? Check out humanprogress.org. People tend to balloon their lifestyles as their income grows, sometimes to the complete exclusion of their future. The rest of the world is more than happy to come up with ways to talk them out of their money, but that doesn’t need to be the case.

Creating with Quality

I read a book not too long ago, Lila by Robert Pirsig, where the author describes a system for organizing information and performing work. In it he describes two parts, a change agent and a lock-in mechanism. The change agent could be anything — a person or any kind of random interaction — anything that can bump the system into a new state based on a set of rules. The lock-in mechanism is the way that changes get stored and checked for usefulness. He likens this to a ratchet where a little work can be done, checked, and stored in a state where you can leave and return to make more progress at a later time.

This applies to virtually all creation. The universe is a soup of particles being tossed about operating under a strict set of rules:

  • Assuming for discussion that the quantum level is the base
  • Quantum wave/particle interaction yields a stable atomic system
  • Atomic interactions yield a chemical system
  • Chemical interactions yield a protein system
  • Protein interactions yield a DNA system
  • DNA manipulations yield codified social systems

I may have skipped some steps, but I think it can be seen that each layer rests on the foundations of the previous layer. Each of these systems is subject to changes from various sources. Each of these layers has a mechanism for storing and/or replicating those changes to be acted on at a later time. DNA is an incredibly rich system, but it’s nothing compared to the level at which we are, or will be, operating intellectually.

Every factory comes from a blueprint and list of processes for creation. Stores and shops facilitate resource distribution, and offices are home to countless business-value processes. These are all improving regularly, sometimes like clockwork with a predictable pace.

If you have a system that allows you to make changes and check them against all the expectations of the system, you can very quickly deliver new features with confidence.

This model easily applies to software development. In fact, we explicitly structure projects in this manner. The engineers and designers are the change agents for obvious reasons. The revision control, build, and test systems are the lock-in mechanism. The software power-shops have their lock-in time frame down to hours, if that. They can make changes and push them to customers and users extremely rapidly. The trick to making this work well is a high-fidelity ratchet: your tests.

Completeness is important in your testing. Missing a problem and shipping it is like slipping a tooth on your ratchet. If enough teeth are slipping, you’ll find it gets very difficult to advance as the expectations of your system grow.

Frequently you’ll find that you need to improve the ratchet itself. Less work per stored change (architecture) or less verification time (testing overhead) will bring the ratchet cycle time down, ultimately making you more productive, and by much more than the reduced overhead. When an idle 15 minutes could turn into a productive 15 minutes, the frequent little boosts add up.

I personally prefer to be able to do something useful and verifiable within 30-60 minutes. If that’s not easily doable, I tend to put time into my ratchet.

The small-cycle ratcheting technique can apply to other types of creative work as well. Model simulation and rapid prototyping techniques are quickly turning relatively complex manufacturing into an everyman’s game. I think we’ll be better off for it.

Configurable Mobile Devices

I think it’s time for a configurable mobile device platform. Like the ATX standard of the previous generation, this could serve as the foundation upon which many fun projects are born.

As the industry fights for thinnest and lightest, they’ve shattered many form and function boundaries. Form and function are separating. Function no longer requires the volume, mass, energy, or cooling of past technology. As this separation progresses, we have a lot more leeway on form, but we seem strictly focused on sleekness. A standardized mobile form might cost some volume and weight, but could still be well within the state-of-the-art dimensions of a few years earlier.

I write this on an Asus UX305FA, an impossible device a mere decade ago. The thin-and-light notebook market has exploded with models and options, all of which have a slightly different mix of features, none of which seem to match any particular person’s needs. Most probably aren’t terribly successful products.

The die-hard DIY crowd has put together projects involving clusters of Raspberry Pis and other groups of small computing units like Intel’s NUC. People are making this work, awkwardly, without any particular standard. The motivation for creative exploration is there, but the industry isn’t facilitating as well as it used to. This slows creativity.

Project Ara by Motorola and then Google looks like a good swing at this idea, but they may be thinking too small — smart phones might be the wrong target. The one place where space is at a premium is in your pocket. However, tablets, notebooks, and even high-performance systems could benefit from a fresh look at smaller standardized form factors.

Of course, with this idea comes all the traditional problems of customizing systems of “standard” parts. Components could be slightly wrong on the spec, parts could conflict in unspecified ways, you’re responsible for whatever monstrosity you manage to cobble together yourself. However, the opportunity for creative experimentation is enormous. Companies like Dell, HP, and Compaq grew up supporting their particular grouping of standardized components. There is still a healthy market of PC customizers and modders. One need look no further than YouTube channels like LinusTechTips to see the enthusiasm, both on their part and the part of their subscribers.

With a mobile component interface standard, component manufacturers would have more freedom to experiment on their own in their domain. Phone, tablet, notebook, and stationary case manufacturers could experiment with all kinds of forms that wouldn’t necessarily survive the design process of a mass-produced device. The same is true of the components themselves.

This certainly wouldn’t make the large manufacturers like Apple, Asus, and Toshiba irrelevant. Someone still needs to push the leading edge of technology. It might even make their products better. Their current innovations seem to be more miss than hit. A customization community might provide an idea pool from which they could refine the next major features.

If there are other groups pushing in this direction, I would love to hear about them.

One-Hour Blogging

There is one rule: write a blog post in under an hour.

I often tell people that they should blog. If they’re an expert on a topic, write about that. If they’re learning about a topic, write about first impressions and relationships to other things that they understand. People love a good “rise to competence” story. Maybe you provide something valuable to someone. Maybe you demonstrate competence to potential employers or partners. Maybe you make contact with interest groups that you didn’t realize existed. Maybe all that comes of it is improved writing skills.

Unfortunately, I’m not good at taking my own advice.

I came up with the idea of the one-hour blog to get more thoughts recorded and published. I mull over all kinds of topics and write a lot, but when it comes to publishing I rarely feel like the topic is covered fully enough, the writing is clear enough, or the points are accurate enough.

The idea first struck me a few months ago when I thought that time-bounding my blogging might force me to produce more content. Unfortunately I haven’t dedicated myself to this approach, so it hasn’t helped much yet.

The approach is fairly straightforward. Write non-stop on a topic for 45 minutes. Spend 15 minutes cleaning and organizing. Let it fly.

One problem I’ve run into when applying this idea is that I believe most topics deserve more attention than might be implied by the “1-hr-blog” tag. I keep a topic list on hand, so I’ll probably begin tagging some as “1-hr-blog” possibilities.

“Anything worth doing is worth doing poorly” is a quote I hear floating around, and I agree with the sentiment. Improvement doesn’t happen without practice and, more importantly, criticism. The perfect blog entry may be possible, but I may spend so much time trying to achieve it that the end result isn’t worth the time. Better to shrink the feedback loop.

Where “1-hr-blog” fails: anything that requires research or data collection. I can really only touch on topics with which I’m very familiar or just give some quick impressions. Side note: posts that explain how to do something or present data that I collected myself perform much better than most, with good reason. Unique value gets unique attention.

My goal is to make 2018 the year of the one-hour blog. An hour a month isn’t too much time to set aside, and if I stick to it, I may find a way to really make it work.

You can probably look forward to more writing on hobbies and projects since I can provide more useful and/or interesting information off the top of my head. The topic list is already growing.

Agile vs. Formal Management Systems

I have spent some time recently navigating the differences between Agile Methods and development under a Formal Management System, ISO 13485.

I’ve spent a lot of time in and leading Scrum teams. While no group I’ve ever worked with felt like they were doing “real” Scrum, they all got the basic gist of small work increments with frequent updates among the team and stakeholders. I think this model has been useful mostly because the developers feel like they know what’s happening (or they have little excuse if they don’t), and the stakeholders feel similarly in the loop. Problems get recognized quickly, and prioritizing them against other goals happens regularly. Thus a project doesn’t go too far off the rails before a correction occurs.

Scrum recognizes that the requirements, and therefore design and testing, are never completely established until the stakeholder accepts the work. Up until that point everything is considered to be in flux to some extent.

Formal Management Systems, on the other hand, are concerned with demonstrating effectiveness. If something isn’t written down, it doesn’t exist. ISO 13485 is particularly all-encompassing, with something to say about virtually every aspect of product development.

ISO 13485 (and others) also breaks development down into phases: requirements, design, verification, validation, and manufacture (it’s a manufacturing standard), with review and approval steps for each. The formality of this sequence reads like a traditional waterfall development process and might seem jarring to an agile development group whose core tenets include “People before Process”.

One of the largest distinctions is that Agile Methods tend to be adopted voluntarily by engineering teams because they appear to provide benefits in terms of team flexibility and delivery throughput to customers. On the other hand, the only instances of Formal Management System adoption that I’ve seen are when an organization wants to demonstrate to another organization that it is using such a system. This might skew the organization’s motivations a bit and will almost certainly feel like a burden being placed on the development teams.

Does it need to be a burden? That’s what I’ve been working on.

Before I continue, note that I can’t guarantee anything here would be accepted by a Formal Management System audit. I’m not an expert on the interpretation of these standards (though I am certified “Competent” on ISO 13485:2016, for what that’s worth). Use your best judgment when implementing your system. Based on observation, auditors aren’t too particular about the techniques you use to achieve the requirements as long as you can justify your decisions to yourself and third parties.

Documentation

First of all, everything needs to be written down. Most engineering organizations have a wiki or other place to store written documents. If it has revision history and backups, you have met most of the document control requirements. In addition you will need an approval system and a way to observe which versions are approved and which is current.

Historically, this requirement would have implied a lot more paper pushing. With electronic systems, it’s little more difficult than any other aspect of software engineering.
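
To illustrate how little is actually required, here is a minimal sketch in Python (hypothetical names throughout, not a real document-control product) of the approval metadata you would track per document: who approved which revision, and which approved revision is current.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class Revision:
        number: int
        author: str
        approved_by: Optional[str] = None  # stays None until formally approved
        approved_on: Optional[date] = None

    @dataclass
    class ControlledDocument:
        title: str
        revisions: list[Revision] = field(default_factory=list)

        def current_approved(self) -> Optional[Revision]:
            # "Current" means the latest revision that carries an approval.
            approved = [r for r in self.revisions if r.approved_by is not None]
            return approved[-1] if approved else None

A wiki with revision history already stores most of this; the approval fields are the only genuinely new bookkeeping.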

Training

In order to demonstrate that your team is competent to do the work they are doing, you’ll need to keep records of qualifications and training. Resumes are usually collected at interview time. After that, it’s up to management to keep track of internal training and experience. Fortunately, this can be fairly light on the team, depending on how you define competence and required training.

Most agile teams rely on a combination of review and regression testing to determine when things are going wrong. Training new employees usually consists of a “Getting Started” document, asking questions of the team, and relying on the review-and-test system to raise alarms when things are wrong. I believe writing this up as the process might be acceptable to a Formal Management System as long as you can show that you understand the risks.
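
If you want something slightly more structured without buying enterprise software, even a tiny script can show an auditor the gap between required and completed training. A minimal sketch in Python, with hypothetical role and course names:

    # Hypothetical sketch: required training per role, and a gap check.
    REQUIRED = {
        "developer": {"getting-started", "code-review-process", "iso-overview"},
    }

    def training_gaps(role: str, completed: set[str]) -> set[str]:
        # Anything required for the role but not yet recorded as complete.
        return REQUIRED.get(role, set()) - completed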

On the other hand, you could introduce new training software that would be more at home in a multi-national corporation, drag your teams through reams of mundane slide-show courses, and definitively show that your organization meets the requirements. It’s clear which option might grate on the company culture, but this one has the advantage of being readily demonstrable.

Requirements

ISO 13485 treats requirements gathering as a phase, but it allows for modifications provided they go through a review and approval process. Requirements modification is the normal state for an Agile team, so this could be made to fit an Agile development model.

Many Agile teams use Epics. We use Epics to represent our features and then break them down into Stories, verifiable product-change increments. Early in the Epic life-cycle, we collect a set of written requirements for the feature represented by the Epic. Thus, a list of Epics that are intended to be released is also a list of requirements.
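
As a sketch of how this ties together (Python, with hypothetical field names, not our actual tooling), an Epic that carries its own written requirements makes the release requirements list a simple traversal:

    from dataclasses import dataclass, field

    @dataclass
    class Story:
        title: str
        verification: str  # how this product-change increment gets verified

    @dataclass
    class Epic:
        feature: str
        requirements: list[str]  # written requirements, collected early in the Epic life-cycle
        stories: list[Story] = field(default_factory=list)

    def release_requirements(epics: list[Epic]) -> list[str]:
        # A list of Epics intended for release doubles as the requirements list.
        return [req for epic in epics for req in epic.requirements]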

Risk Management

This concept, in the context of project management and engineering, is relatively new to me. We have been using the idea of risk implicitly forever, but I had never considered making it concrete to better communicate the status of the development process and product to stakeholders. In retrospect, that feels like a big oversight.

Risk Management is a requirement of many Formal Management Systems, so there isn’t going to be a way around it. Given that, introducing Risk Management to our Agile development necessarily changes how our teams do things — but it can be Agile, and it definitely adds something valuable.

We implemented risk tracking by creating a new Risk ticket type in our ticket tracking system. Risks, at the moment, are analogous to Bugs that haven’t been observed yet. Review these periodically, raise the particularly bad risks to management, and you should have a system that at least conforms to ISO 13485.

We rolled Risk evaluation into our regular Bug evaluation cycle.
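
To make that concrete, here is a minimal sketch (Python, with hypothetical fields and threshold) of a Risk ticket using the common probability-times-severity scoring, plus the filter you would apply during the periodic review:

    from dataclasses import dataclass

    @dataclass
    class RiskTicket:
        summary: str
        probability: int  # 1 (rare) to 5 (frequent)
        severity: int     # 1 (negligible) to 5 (catastrophic)
        mitigation: str = ""

        @property
        def score(self) -> int:
            return self.probability * self.severity

    def needs_escalation(risks: list[RiskTicket], threshold: int = 12) -> list[RiskTicket]:
        # During the regular Bug/Risk evaluation cycle, anything scoring at or
        # above the threshold gets raised to management.
        return [r for r in risks if r.score >= threshold]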

Design

Design is treated similarly to requirements: updates are allowed given subsequent review and approval. Once again, design update is standard operating procedure for Agile Methods, so relevant documentation should lean on this heavily.

The collection of Stories and Risks that were produced from the Epic breakdown above are effectively our design. They list what needs to be changed, supporting documentation, how those changes should be verified, what Risks were addressed, and (more importantly) which Risks weren’t.

Manufacture

Most Agile software shops are going to be running a build server in some form of Continuous Integration mode. Most teams I’ve worked with have done daily full test cycles with continuous unit test cycles throughout the day.

Assuming your scripts are in good shape, this is by definition a highly controlled process, and it should be relatively easy to document. Make sure you’re verifying what you should be, and keep records of test results.
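
For the record-keeping half, a small wrapper around the test run is enough to leave an audit trail. A minimal sketch in Python, assuming a hypothetical "make test" entry point and archive directory:

    import json
    import subprocess
    from datetime import datetime, timezone
    from pathlib import Path

    RECORDS = Path("test-records")  # hypothetical archive location

    def run_and_record(version: str) -> bool:
        # Run the full suite; any runner that reports failure via exit code works.
        result = subprocess.run(["make", "test"], capture_output=True, text=True)
        stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H%M%SZ")
        record = {
            "version": version,
            "timestamp": stamp,
            "passed": result.returncode == 0,
            "output": result.stdout[-10000:],  # keep the tail of the log
        }
        RECORDS.mkdir(exist_ok=True)
        (RECORDS / f"{version}-{stamp}.json").write_text(json.dumps(record, indent=2))
        return record["passed"]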

Review

I have been a fan of and have used lightweight changelist review for years. From over-the-shoulder to pull requests, it is a staple of Agile development. I think this fulfills a form of manufacturing monitoring.

Review is a strength of Scrum and Agile Methods in general. Engineering teams meet and discuss issues and design changes (if necessary) daily. Stakeholders meet to discuss potential requirements regularly.

All of these group activities should satisfy the review requirements. Make sure to record that the meetings are happening and who is present, especially when design and requirements changes are being made.

Approval

One aspect of Formal Management Systems that may seem alien to Agile teams, or at least ours, is the requirement for approval: approval of documentation, approval of requirements, approval of design, approval of release.

It didn’t initially occur to us, but by keeping all groups in the loop on a regular basis, we are effectively getting implied approval regularly.

How this maps to ISO 13485: approval of requirements changes is handled by an internal stakeholder meeting; approval of design changes by the Product Owner Scrum role; and release approval by the internal stakeholder meeting.

The area where we haven’t been strong on approval is changes to internal procedures. Traditionally we have trusted that our team would do the correct thing when making updates and relied on the change history when things didn’t look right. After all, if someone was doing something wrong, they weren’t likely to get far without tripping over the build process.

Summary

Formal Management Systems approached this way can combine with Agile Methods into something very potent. The responsiveness of the company, team, and product is maintained without much impact on the cadence of the teams.

The auditors will ultimately tell us whether this approach is acceptable, but it seems reasonable.

Financing the Independent Information Project

Before you read any further, I’m not an expert in any of this. If I were, I’d probably be publishing in a much larger outlet. I’m just trying to figure it out.

First of all, if you’ve got a project to work on but otherwise no funding, you’re still going to have to put food on the table. The following list covers good ways to keep yourself going. Note that it’s a good idea to keep your interests aligned: find work that supplements your other interest(s). Just make sure to thoroughly understand any legal agreements that you sign regarding intellectual property and outside work. This is worth talking with a lawyer about, and it may be worth dropping an employer over.

Making Ends Meet

Salary/Hourly – Stable work, but usually requires a significant time commitment. Make sure it’s developing some useful skills.

Freelance/Contract – This is similar to salary/hourly work when you have it, but you may spend a large amount of time looking for the next job. I like this idea because it puts you in a business mindset. However, the minutiae can be a distraction from your project. Another advantage of this method is that you could slowly convert to contracts in support of your project.

Online Pay-Per-Code – I like the idea of sites like guru.com and RentACoder, but I understand it’s not easy to trust a coder off the street, or to trust a spec written by an unfamiliar hand. These also tend not to pay well because the competitor pool is global.

Significant Other – If you’re fortunate enough to have someone willing to support you in your ventures, use it wisely and good luck.

Angel or Venture Capital – You’re beyond my experience.

Trust Fund – You’re probably set. You can stop here because you’ve almost certainly got much better resources available than the blog of an amateur.

Now that you’re presumably fed well enough to get up each day, we can consider how to finance the project that you want to work on.

Methods I Like

App Stores – There’s an advantage to creating small, useful versions of an idea and putting them up for sale: you can quickly find out if there’s interest. Of course, there are quicker ways to find interest, but this way might have some dollars attached.

Freemium/Patreon – This has been a boon for YouTube and other creative content generation, and there’s no reason it couldn’t be applied to software. Creators who do well here tend to have already established a rapport with their user base or audience.

Sponsored – To my knowledge, most major open source projects go this route, and it seems a solid plan. I haven’t much insight into how to get started with this. I assume it requires a project with a certain level of interest to get significant sponsorship dollars.

Kickstarter/IndieGoGo – These are some incredibly powerful ways to get projects going. However, they (Kickstarter at least) have been associated with some fraud. As a result, the campaigns usually either need a good amount of marketing, a team behind them that has already proved they can deliver, or a demonstrable product that just needs cleanup and/or production.

Business-to-Business – These are some of the most lucrative deals, but they’re heavyweight and time-consuming. As far as I can tell, this model usually involves a dedicated sales person or team.

Methods I Don’t Like

Advertising – My opinion of this revenue source has soured over the last few years. It didn’t seem so bad online — until it was: popup ads, interstitials, and copious inline ads have ruined the model for me. The appearance of ads within mobile and desktop apps led me to the conclusion that I would never want my work associated with this model. Ultimately, advertising networks are about tracking people and trying to figure out exactly who they are and what they like. It’s creepy.

A light advertising touch might be acceptable in some online cases, and I would want strict control over the kind of advertising presented to my audience. Otherwise it’s a line I would prefer not to cross.

Pay-Per-Use – This is a model that’s growing rapidly because the Internet makes it easy. It keeps your software relatively safe from theft and undesired manipulation. It also makes it relatively quick to deliver new changes, optimize, and identify and fix user problems.

With all of these great benefits, why don’t I like it? It seems to me to be the antithesis of open source, a model I am quite fond of. Open source maximizes the availability of software and the learning that comes from it. Pay-per-use is a service model in which the provider may have little interest in sharing how things are done and a near-endless ability to scale.

Despite my reservations, it’s a powerful model that deserves consideration. I would just avoid making it the only path.

Other Thoughts

Mixes of the above are probably prudent. The more ways that you can accept funds, the more resilient you are to unexpected swings.