Tag Archives: 1-hr-blog

Thoughts on ReactJS

I’ve spent some time with ReactJS recently (and in the past), and I thought I would make some notes.

ReactJS is a UI framework for managing the presentation and interaction of UI components in a web app (or in native mobile applications with React Native). It manages the structural document changes behind the scenes and renders them to the browser/screen.

There is a paradigm called Model-View-Controller (MVC) for UI development. The Model represents the data layer, the View represents its display, and the Controller represents the code used to mediate the two. I’ve said in the past that if the Model were rich enough, the Controller would be unnecessary.
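To make the three roles concrete, here’s a bare-bones sketch in plain JavaScript; the counter model, view function, and controller are my own illustration, not tied to any particular framework.

    // A bare-bones sketch of the three MVC roles (my own illustration).
    const model = { count: 0 };                      // Model: the data layer

    const view = (m) =>                              // View: how the data is displayed
      `<button>clicked ${m.count}x</button>`;

    function controller(event, m) {                  // Controller: mediates the two
      if (event === 'click') {
        m.count += 1;                                // update the Model
      }
      return view(m);                                // re-render the View
    }

    console.log(controller('click', model));         // <button>clicked 1x</button>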

ReactJS fills at least the View side of the MVC paradigm — possibly even the View-Controller. There are some standard data-management libraries that frequently accompany it, but they are optional, and I have not used them.

Each ReactJS UI component follows a lifecycle through the construction, presentation, maintenance, and teardown of the bits that get rendered to an HTML document.
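Roughly, that lifecycle looks like this in code; the Clock component and its one-second timer are my own example, written against the class-component API that was current at the time:

    // Construction: the constructor sets up initial state.
    class Clock extends React.Component {
      constructor(props) {
        super(props);
        this.state = { now: new Date() };
      }
      // Presentation: runs once the component is in the document.
      componentDidMount() {
        this.timer = setInterval(
          () => this.setState({ now: new Date() }),  // maintenance
          1000
        );
      }
      // Teardown: clean up before the component leaves the document.
      componentWillUnmount() {
        clearInterval(this.timer);
      }
      // Describes the bits that end up in the HTML document.
      render() {
        return <p>{this.state.now.toLocaleTimeString()}</p>;
      }
    }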

Despite my love for the functional model, UI design is fundamentally an exercise in state management. Every user interaction changes small stateful pieces, one at a time, to achieve some effect. However, I would say React does a good job of delineating where state changes should happen versus where the side-effect-free computations belong.

The maintenance phase of each component is very unstructured. State updates happen either through internal updates (this.state) or through the owning component (this.props), and it’s up to the developer to wire the downstream effects together in JS code. The one exception is the update of the presentation, which always happens when the component state changes (barring rare circumstances, such as opting out via shouldComponentUpdate).
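Here’s a hedged sketch of those two update paths, with the downstream wiring done by hand; the Owner and Display names are hypothetical:

    // Display gets its data from its owner via this.props.
    class Display extends React.Component {
      componentDidUpdate(prevProps) {
        // Downstream effects must be wired manually, in code like this.
        if (prevProps.value !== this.props.value) {
          console.log('owner pushed a new value:', this.props.value);
        }
      }
      render() {
        return <span>{this.props.value}</span>;
      }
    }

    // Owner keeps internal state via this.state and passes it down.
    class Owner extends React.Component {
      constructor(props) {
        super(props);
        this.state = { value: 0 };
      }
      render() {
        // setState triggers the (nearly) unconditional presentation update.
        return (
          <div onClick={() => this.setState({ value: this.state.value + 1 })}>
            <Display value={this.state.value} />
          </div>
        );
      }
    }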

In the past, I built small UIs out of communicating state machines. React would have been a great tool for managing the addition and removal of components, but that’s about where the benefit would have ended. I was ultimately going for much more granular control of the UI component interactions. I would rather spell out explicit data flows and state changes in a model than have them implicitly buried in blocks of code.

I think React has the potential to be the foundation of a much richer UI management framework. Frameworks like AngularJS and VueJS, which I’m much less familiar with, may already do what I’m looking for; I’ll have to check them out at some point. My preference for the “minimal” option took me down the ReactJS path, and I like it.

Creating with Quality

I read a book not too long ago, Lila by Robert Pirsig, in which the author describes a system for organizing information and performing work. He describes two parts: a change agent and a lock-in mechanism. The change agent could be anything (a person or any kind of random interaction) that can bump the system into a new state based on a set of rules. The lock-in mechanism is the way changes get stored and checked for usefulness. He likens this to a ratchet: a little work can be done, checked, and stored in a state where you can leave and return to make more progress later.

This applies to virtually all creation. The universe is a soup of particles being tossed about operating under a strict set of rules:

  • Assuming for discussion that quantum is the base
  • Quantum wave/particle interaction yields a stable atomic system
  • Atomic interactions yield a chemical system
  • Chemical interactions yield a protein system
  • Protein interactions yield a DNA system
  • DNA manipulations yield codified social systems

I may have skipped some steps, but I think it can be seen that each layer rests on the foundations of the previous one. Each of these systems is subject to changes from various sources, and each layer has a mechanism for storing and/or replicating those changes to be acted on at a later time. DNA is an incredibly rich system, but it’s nothing compared to the level at which we are (or will be) operating intellectually.

Every factory comes from a blueprint and list of processes for creation. Stores and shops facilitate resource distribution, and offices are home to countless business-value processes. These are all improving regularly, sometimes like clockwork with a predictable pace.

If you have a system that allows you to make changes and check them against all the expectations of the system, you can very quickly deliver new features with confidence.

This model applies readily to software development. In fact, we explicitly structure projects this way. The engineers and designers are the change agents, for obvious reasons. The revision control, build, and test systems are the lock-in mechanism. The software power-shops have their lock-in time frame down to hours, if that; they can make changes and push them to customers and users extremely rapidly. The trick to making this work well is a high-fidelity ratchet: your tests.
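As a small illustration of one tooth in that ratchet, here’s a regression test in plain Node.js; the slugify function and its edge case are invented for the example:

    // One tooth in the ratchet: a check stored alongside the change.
    const assert = require('assert');

    function slugify(title) {
      return title.trim().toLowerCase().replace(/\s+/g, '-');
    }

    // Once this assertion is committed, the behavior can't silently slip back.
    assert.strictEqual(slugify('  One Hour  Blog '), 'one-hour-blog');
    console.log('ratchet holds');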

Completeness is important in your testing. Missing a problem and delivering it is like slipping a tooth on your ratchet. If enough teeth slip, you’ll find it gets very difficult to advance as the expectations of your system grow.

Frequently you’ll find that you need to improve the ratchet itself. Less work per stored change (architecture) or less verification time (testing overhead) brings the ratchet cycle time down, ultimately making you more productive, and by much more than the reduced overhead alone. When an idle 15 minutes can turn into a productive 15 minutes, the frequent little boosts add up.

I personally prefer to be able to do something useful and verifiable within 30-60 minutes. If that’s not easily doable, I tend to put time into my ratchet.

The small-cycle ratcheting technique can apply to other types of creative work as well. Model simulation and rapid prototyping techniques are quickly turning relatively complex manufacturing into an everyman’s game. I think we’ll be better off for it.

Configurable Mobile Devices

I think it’s time for a configurable mobile device platform. Like ATX did for a previous generation, it could serve as the foundation upon which many fun projects are born.

As the industry fights for the thinnest and lightest devices, it has shattered many form and function boundaries. Form and function are separating: function no longer requires the volume, mass, energy, or cooling of past technology. As this separation progresses, we have a lot more leeway on form, yet we seem strictly focused on sleekness. A standardized mobile form factor might cost some volume and weight, but it could still be well within the state-of-the-art dimensions of a few years earlier.

I write this on an Asus UX305FA, an impossible device a mere decade ago. The thin-and-light notebook market has exploded with models and options, all of which have a slightly different mix of features, none of which seem to match any particular person’s needs. Most probably aren’t terribly successful products.

The die-hard DIY crowd has put together projects involving clusters of Raspberry Pis and other groups of small computing units like Intel’s NUC. People are making this work, awkwardly, without any particular standard. The motivation for creative exploration is there, but the industry isn’t facilitating as well as it used to. This slows creativity.

Project Ara, by Motorola and then Google, looks like a good swing at this idea, but they may be thinking too small; smartphones might be the wrong target. The one place where space is at a premium is in your pocket. However, tablets, notebooks, and even high-performance systems could benefit from a fresh look at smaller standardized form factors.

Of course, with this idea come all the traditional problems of customizing systems of “standard” parts: components can be slightly off-spec, parts can conflict in unspecified ways, and you’re responsible for whatever monstrosity you manage to cobble together yourself. However, the opportunity for creative experimentation is enormous. Companies like Dell, HP, and Compaq grew up supporting their particular grouping of standardized components, and there is still a healthy market of PC customizers and modders. One need look no further than YouTube channels like LinusTechTips to see the enthusiasm, both on their part and on the part of their subscribers.

With a mobile component interface standard, component manufacturers would have more freedom to experiment on their own in their domain. Phone, tablet, notebook, and stationary case manufacturers could experiment with all kinds of forms that wouldn’t necessarily survive the design process of a mass-produced device. The same is true of the components themselves.

This certainly wouldn’t make the large manufacturers like Apple, Asus, and Toshiba irrelevant. Someone still needs to push the leading edge of technology. It might even make their products better. Their current innovations seem to be more miss than hit. A customization community might provide an idea pool from which they could refine the next major features.

If there are other groups pushing in this direction, I would love to hear about them.

One-Hour Blogging

There is one rule: write a blog post in under an hour.

I often tell people that they should blog. If you’re an expert on a topic, write about that. If you’re learning about a topic, write about first impressions and how it relates to other things you understand. People love a good “rise to competence” story. Maybe you provide something valuable to someone. Maybe you demonstrate competence to potential employers or partners. Maybe you make contact with interest groups you didn’t realize existed. Maybe all that comes of it is improved writing skills.

Unfortunately, I’m not good at taking my own advice.

I came up with the idea of the one-hour blog to get more thoughts recorded and published. I mull over all kinds of topics and write a lot, but when it comes to publishing, I rarely feel that the topic is fully covered, the writing is clear enough, or the points are accurate enough.

The idea first struck me a few months ago when I thought that time-bounding my blogging might force me to produce more content. Unfortunately I haven’t dedicated myself to this approach, so it hasn’t helped much yet.

The approach is fairly straightforward: write non-stop on a topic for 45 minutes, spend 15 minutes cleaning and organizing, then let it fly.

One problem I’ve run into when applying this idea is that I believe most topics deserve more attention than the “1-hr-blog” tag implies. I keep a topic list on hand, so I’ll probably begin tagging some entries as “1-hr-blog” possibilities.

“Anything worth doing is worth doing poorly” is a quote I hear floating around, and I agree with the sentiment. Improvement doesn’t happen without practice and, more importantly, criticism. The perfect blog entry may be possible, but I may spend so much time trying to achieve it that the end result isn’t worth the time. Better to shrink the feedback loop.

Where “1-hr-blog” fails: anything that requires research or data collection. I can really only touch on topics I’m very familiar with or give some quick impressions. Side note: posts that explain how to do something or present data I collected myself perform much better than most, with good reason; unique value gets unique attention.

My goal is to make 2018 the year of the one-hour blog. An hour a month isn’t too much time to set aside, and if I stick to it, I may find a way to really make it work.

You can probably look forward to more writing on hobbies and projects since I can provide more useful and/or interesting information off the top of my head. The topic list is already growing.

Functional Programming Definition

A dozen years ago, when I was first investigating functional languages, the distinction between functional and not was much more apparent. Functional languages had functions as values and parameters, and closures: inline-defined functions that capture (close over) context for use in future invocations. And if they didn’t outright prohibit variable updates, most of them discouraged programming styles that relied on them.
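For anyone who hasn’t run into those traits, a quick JavaScript sketch (the add example is my own):

    // A function as a value: add returns another function.
    const add = (x) => (y) => x + y;

    // A closure: the inner function captures (closes over) x
    // and keeps it for future invocations.
    const add5 = add(5);
    console.log(add5(2));             // 7

    // A function as a parameter: add5 is passed to map.
    console.log([1, 2, 3].map(add5)); // [ 6, 7, 8 ]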

Today, most new languages are a thorough mix of imperative, object-oriented, and functional styles. Functions-as-parameters and closures (or something roughly similar) are as common as the procedural paradigm was in the ’90s. However, the one key point that has largely been missed is the discouragement of modifiable variables. The first two styles, imperative and object-oriented, basically require them.

As a result, we’re left with programmers today making a lot of the same non-functional mistakes of years past, and the software is little better for it.

Therefore, my favorite definition of “functional language” is what is more commonly called “pure functional”: languages that do not permit side effects (state changes). The idea breaks down around the edges of software design, and that’s where much of the diversity among pure-functional languages can be found, but it is the property that gives the functional style the majority of its benefits.
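To illustrate the distinction, here’s a contrast of my own, sketched in JavaScript rather than in a pure language:

    // Imperative style: the result is built by repeatedly updating a variable.
    function totalImperative(prices) {
      let total = 0;
      for (const p of prices) {
        total += p;                  // a state change on every step
      }
      return total;
    }

    // Side-effect-free style: no variable is ever updated; the same
    // inputs always produce the same output.
    const totalPure = (prices) =>
      prices.reduce((sum, p) => sum + p, 0);

    console.log(totalImperative([1, 2, 3]), totalPure([1, 2, 3])); // 6 6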

When you imagine the intricacies of implicitly changing state as part of a computation in large, distributed systems, I think you begin to understand why Everything is Functional.

Everything is Functional

At a high enough level, all computing is functional. If your language isn’t functional (free of state changes and side effects), your process isolates it. If your process modifies global state, then your hypervisor or individual machine isolates it. And if your machine works with a cluster of machines to maintain and modify global state, then all machines outside the cluster still operate in isolation from it.

The model of global state is a difficult one to maintain as scale increases: the time to synchronize data grows, and complications due to stale data get worse. At the largest scales, the distinction between the database and the communication layer (think pub-sub) breaks down, and they effectively become one. This is the model of tools like RethinkDB, where a query becomes an asynchronous request for updates to a particular subset of the data model.
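A hedged sketch using RethinkDB’s JavaScript driver; the scores table and game field are hypothetical, but the .changes() changefeed is the real mechanism:

    const r = require('rethinkdb');

    r.connect({ host: 'localhost', port: 28015 }, (err, conn) => {
      if (err) throw err;
      // The query doubles as a subscription: .changes() turns a request
      // for a subset of the data into an asynchronous feed of updates.
      r.table('scores')
        .filter(r.row('game').eq('chess'))
        .changes()
        .run(conn, (err, cursor) => {
          if (err) throw err;
          cursor.each((err, change) => {
            if (err) throw err;
            console.log('old:', change.old_val, 'new:', change.new_val);
          });
        });
    });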

The latest software modeling paradigms make a point of restricting state to a locale via microservices. Each microservice is allowed its own state but communicates with all other parts of the system via a common message bus. Tools like Storm and Spark specialize in connecting these services into a larger dataflow model.
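As a toy, in-process stand-in for that pattern (my own sketch; a real system would put a network bus like Kafka where Node’s EventEmitter sits here):

    const { EventEmitter } = require('events');
    const bus = new EventEmitter();           // stand-in for the message bus

    // Service A: owns its own bit of state, publishes derived events.
    let total = 0;                            // state local to this "service"
    bus.on('order.placed', (order) => {
      total += order.amount;
      bus.emit('orders.total', total);
    });

    // Service B: holds no shared state; it only reacts to the data flow.
    bus.on('orders.total', (t) => console.log('running total:', t));

    bus.emit('order.placed', { amount: 42 }); // running total: 42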

It makes sense. Physical locality is guaranteed to eventually constrain software, regardless of the programming models we’ve gotten away with in the past. I think we would do well to recognize that, when stretched to the largest scales, software is relatively small units of computation and the data flowing between them (just like hardware). Aligning our heads, tools, and techniques around this model is likely to yield the best results long-term.

Pick up and learn a functional language today!

  • Haskell
  • OCaml
  • Scala