When we talk about software-intensive systems, there is an emerging trend known as DevOps.

And there are many DevOps aspects which basically converge into a single phrase - “innovate, automate, or die”:

One part of it I value above all is Infrastructure as Code (IaC).

Seriously. IaC is the practical way to organize the operations change process. Otherwise, DevOps becomes meaningless hype.

What’s in the concept of IaC?

The idea is as old as life.

Let’s take a look at DNA. DNA drives the execution and reproduction of any kind of complex beast.

Software in real life.

DNA is not the beast. It only indirectly defines it.

A hand-made bird may look like a bird, but it won’t reproduce - there is no DNA for such a species.

Hand-made bird prototype.

The alternative to a hand-made bird is proper genetic engineering that defines the required bird. Pre-engineered DNA for a bird can be used to replicate multiple instances of the same species with little effort.

Code is the DNA-like indirect approach to building entire systems. Ideally, nothing changes in the system architecture without modifications in its code.

Code compilation has worked this way since the beginning - you don’t change machine code directly - instead, you change the source code (DNA) and re-build the binary (the beast). IaC additionally defines your networks, hosts, platforms, and other resources through code.

Again, only code (DNA) keeps evolving. The infrastructure can never be changed directly - it can only be re-instantiated.
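
For illustration, here is a minimal Java sketch of that idea. Every name in it (HostSpec, Provisioner, the particular fields) is hypothetical and merely stands in for whatever IaC tool you actually use; the point is only that the description is versioned data, and the running instances are always re-created from it, never patched by hand.

import java.util.List;

// Hypothetical sketch: the "DNA" of the system is a plain, versioned description.
interface Provisioner {
    void destroyIfExists(String name);
    void create(HostSpec spec);
}

record HostSpec(String name, String role, int cpus) {}

final class Infrastructure {
    // The only artifact humans edit - and keep under revision control.
    static final List<HostSpec> DESIRED = List.of(
        new HostSpec("web-1", "webserver", 2),
        new HostSpec("db-1", "database", 4)
    );

    // Re-instantiate the environment from the description.
    static void apply(Provisioner p) {
        for (HostSpec spec : DESIRED) {
            p.destroyIfExists(spec.name()); // nothing is changed in place
            p.create(spec);                 // everything is rebuilt from code
        }
    }
}

Real IaC tools (Terraform, Ansible, and the like) capture the same flow in their own formats; the sketch only illustrates the direction: description first, instances derived from it.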

Excessive Labor => Process Control

If you write code, it is executed by machines.

The more machines execute, the more humans can afford to think about better solutions.

Machines at work.

Machines save the ultimate resource - human time.

There are basically two choices:

  • The repetitive work done by machines will continuously amortize the cost of the “unproductive” human time spent to make these machines.

  • The repetitive work done by humans will only frustrate humans.

    Highly frustrated humans escape or take legal action, rise up and rebel, or sabotage the results of their work.

Practice shows that the more elaborate the setup is, the cheaper it becomes over its entire lifecycle to control changes with code.

And machines have never risen against humans.

Wasteful Documentation => Concise Introduction

If we go back to pre-DevOps days, what we saw was a collection of applications. Each application was defined by changing its source code.

Everything outside these islands of executable application code was glued together by a pile of documentation executed by humans:

  • design,
  • scaling,
  • networking,
  • installation,
  • configuration,
  • updates,
  • recovery, …

Disabled knowledge.

The fewer procedures are automated:

  • the more work is pushed to humans repetitively,
  • the more overwhelmingly detailed documentation has to be written,
  • the more difficult it is to review/verify/test,
  • the more human-time is spent,
  • the more costly the project becomes.

Picture how documents lose any value:

  • They choke both human readers and human writers.
  • They are outdated and not trusted.

With code:

  • Documents become a mere introduction to intentions, setting up context for humans.
  • System infrastructure code is always kept up to date to glue the islands of application code together.

Documented intentions are more concise and more stable than any implementation or even design.

Costly speculations => Exact descriptions

What is wrong with nicely written colorful diagrams listing system components, their properties, and descriptions?

Artist's impression of a system.

  • Documents misrepresent the real system.

    A system architecture is “multidimensional” with inter-related projections which tend to be described by various types of documents (dependencies, configuration, operation, recovery, …).

    These types of documents (projections) keep becoming inconsistent with every system change.

    No one has ever done anything using documentation alone (without re-evaluation, discussion, guessing, workarounds, etc.).

  • Documents make the update process prohibitively inefficient.

The solution is to use a machine-processible system description (code) as the primary source of truth. All projections can be generated from that.

A source of truth.
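
As a tiny illustration of generating projections, the hypothetical sketch below takes a machine-readable dependency description and emits a Graphviz DOT graph from it. The data and class names are made up; the point is only the direction - documents are derived from the description, never the other way around.

import java.util.List;
import java.util.Map;

// Hypothetical example: one machine-readable description, one generated projection.
final class DependencyProjection {
    static final Map<String, List<String>> DEPENDS_ON = Map.of(
        "web", List.of("app"),
        "app", List.of("db", "cache")
    );

    public static void main(String[] args) {
        System.out.println("digraph system {");
        DEPENDS_ON.forEach((from, targets) ->
            targets.forEach(to -> System.out.printf("  \"%s\" -> \"%s\";%n", from, to)));
        System.out.println("}");
    }
}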

You cannot hide inadequacy, inconsistency, or incompleteness from a machine in code.

If you want to face the real complexity of your system, try to explain it to a machine and see all the flaws right away.

User-friendly HMI.

There is always an understandable proportion of people who want to know something but cannot deal with code.

  • Can they deal with the details anyway?
  • Or maybe an introduction to intentions (possibly on a whiteboard) is sufficient?

Basic Integration => Rich Functionality

One of the characteristics of open source projects is independence and reduced reliance on others - instead, they provide extensive flexibility in their configuration for integration.

A "simple" web server alone.

Manual integration is only sustainable with a “keep it simple” approach.

How simple can it be before you start missing required features?

For example, the standard LAMP stack exposes a massive configuration space. And the abbreviation still excludes all the other necessary operational support:

  • Backup and recovery;
  • Application update procedures;
  • Encryption key management;
  • Firewall rules and tightened security;
  • Authentication services;
  • Logging and monitoring;
  • and more …

Don’t get me started on clustering, which multiplies all of that…

Code can turn the mess into an organized, deeply integrated solution.

Divergent builds => Identical rebuilds

Again, the ultimate cost of everything is human time. And it is more directly apparent in the software industry.

Give the same instructions to the same human twice, and he/she will execute them differently every time.

A couple of working hands.

  • There is no trust that two copies made by a human are identical.

    Verifying sameness may pose an insurmountable threshold of human-intensive activity.

  • A machine, on the contrary, trivially executes the same instructions exactly the same way any number of times.

    And this assumption can be trusted.

Moreover, once a machine “knows” how to build, its “knowledge” cannot degrade.

Needless craftsmanship => Useful innovation

Hand-made instances are naturally unique.

However, nobody appreciates craftsmanship that leads to needless excessive effort and reduced output quality.

What if the competition employs automation to get the best of both worlds?

Crafted mugs.

Redefine craftsmanship - think code:

  • Machines become intensively busy

    (otherwise, why did you buy them?)

    … producing cheap instances.

  • Humans become innovatively busy

    (otherwise, why do you pay them?)

    … increasing the value of the code.

Code is an opportunity to escape from the competition by making the threshold to enter the market much higher.

Unrecognized Rework => Explicit Iterative Development

It is a fallacy that a system is built only once.

Instances of an iterative environment.

You need a plant to produce a car. And both have to evolve.

Creating a build often requires a much more complex environment than deploying that build.

At the very least, build-time tools have to be integrated - they simply do not exist in the production environment.

There are always prototypes, components, and sub-components, their versions - everything is set up one way or another.

And engineering environments had better be independent:

  • development × number of developers,
  • testing × number of maintained versions,
  • production × number of customers.

All environments (experimental, staging, demo, stable, production, …) evolve concurrently!

Concurrent iterative environments.

The more environments, the more concurrency, the more features reach the market per unit of time.

The question really is: how many environments can be afforded?

Any serious project will stall due to increased coordination and wait time to access a shared system instance.

  • How often do people interrupt each other, get confused by unexpected results and perform rework?

  • How often does such interference go unnoticed and get misdiagnosed under the wrong conditions?

  • How can you remain sure about anything amid uncontrolled changes?

Code allows spawning independent environments at any version, from early prototypes to final releases, affordably tracking the evolution of every detail.

Stored Artifacts => Mergeable Versions

There are several maturity levels of versioning:

  1. Store versions of system state images.

    Primitive level - it is effectively storage of backups (see below), which only allows reverting.

  2. Compare any two versions.

    Intermediate level - it clarifies what exactly makes two versions different.

    The difference tells you what you gain or lose by reverting.

    Think about it. Without the ability to compare, how would you choose a past version to revert to?

  3. Merge any two versions.

    Advanced level - it enables independent changes (branching). This point has to be elaborated…

Some of the data formats out there cannot be merged easily:

  • graphical images and other multimedia data
  • spreadsheet tables and other WYSIWYG documents
  • various archive files

An unmergeable format means:

  • Only one person can ever change it at a time!
  • Write access must be exclusive, locking everyone else out!
  • All updates must be sequential!

Emerging order

Why would you branch if you cannot merge? Think about it.

Merge-ability is paramount!

Now, it may seem merging is commonly available. Nope. In general, it is extremely tough to implement.

Any code is mergeable plain text. It seems like a primitive format, but its merge-ability allows advanced parallel workflows supported by a myriad of tools.

Scattered copies => True reuse

The same functionality re-occurs in different subsystems. And its direct re-implementation multiplies rework on every subsequent update.

For anyone who avoids rework in software, this is a very well-known problem with a well-known solution.

Code brings both the problem and the solution. Copying an implementation by hand leaves you with the problem alone.

Onsite Development => Location-independent offline activities

The manual approach requires direct (onsite) access to the system environment to change it.

And what if remote access is not always available? Traditional direct manual updates become impossible.

Isolated environment.

The code can be developed, reviewed and delivered without the environment it needs to run in being immediately available.

Applying the code to the target isolated environment is done at the earliest convenience.

With code, the development becomes:

  • offline and location-independent,
  • isolated from conflicts and independent of downtimes introduced by others.

Unseen distributed changes => Centralized management

For any system including more than one node, the configuration becomes distributed.

Over time, it becomes increasingly difficult to comprehend which set of resources on which node is essential.

Brain mapping of resource locations.

On the other hand, the code is always managed centrally. And tools distribute resources automatically and precisely according to various executable conditions.

Centralized management via code may be the single most important factor to eliminate laborious verification and mistakes.
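
As a sketch of what “distribute according to executable conditions” can look like (Deployer and every other name below are assumed, not a real tool’s API): one centrally managed description maps roles to resources, and a loop pushes them out.

import java.util.List;
import java.util.Map;

// Hypothetical sketch: a single, centrally managed map of which resources
// belong on which kind of node.
final class ResourceMap {
    static final Map<String, List<String>> BY_ROLE = Map.of(
        "webserver", List.of("nginx.conf", "tls.crt"),
        "database",  List.of("postgresql.conf", "backup.cron")
    );

    interface Deployer {
        List<String> nodesWithRole(String role);
        void push(String node, String resource);
    }

    static void distribute(Deployer deployer) {
        BY_ROLE.forEach((role, resources) ->
            deployer.nodesWithRole(role).forEach(node ->
                resources.forEach(resource -> deployer.push(node, resource))));
    }
}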

Higher Level Automated Testing

Integration tests are normally bound to a single target environment.

Code hegemony.

If code describes the system, any environment is a few moments away. Even architectural and scaling changes may be covered by automated tests (number of nodes, their roles, network layout, etc.).
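
A hypothetical sketch of such a test (Environment and Provisioner below are assumed wrappers around your own provisioning code, not a real library): spawn the environment at several scales, assert on the architectural property, then throw the environment away.

// Hypothetical sketch: architecture-level checks become ordinary automated tests
// once environments can be created and destroyed from code.
final class ScalingCheck {
    interface Environment extends AutoCloseable {
        int activeNodes();
        @Override void close();            // no checked exceptions, for the sketch
    }

    interface Provisioner {
        Environment spawn(String spec, int nodes);
    }

    static void checkScaling(Provisioner provisioner) {
        for (int nodes : new int[] {1, 3, 5}) {
            try (Environment env = provisioner.spawn("cluster.spec", nodes)) {
                if (env.activeNodes() != nodes) {
                    throw new AssertionError("expected " + nodes + " nodes, got " + env.activeNodes());
                }
            } // the environment is disposable: destroyed right after the check
        }
    }
}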

Reviews and Demos

Do you practice reviews or demos? And how exactly are these processes made frequent and practically possible?

You may suffer from the fact that there is simply no environment to demonstrate deliverable functionality.

Experiencing new features.

With code:

  • The demo is performed on a separate system, fully instantiated on demand with machine precision - you see it right away, interactively.

  • Reviews are facilitated by highlighted and detailed code changes.

    Worry no more - details are never forgotten, but you may choose to ignore them.

Recovery

Parts of the system will inevitably need recovery.

Even before hardware becomes a problem, a human is more likely to introduce an undiscoverable breaking change manually.

Entropy in action.

Can you tell how long it takes to recover?

The question goes beyond any crucial production site. A project may include development and testing teams with their own operations - progress stalls on failures in their environments.

You may think about system redundancy, but redundant instances have to be maintained and precisely restored to make the system fault-tolerant again.

Apart from disaster recovery, there is an almost daily need to reset environments to their previous versions or to versions deployed for a particular customer.

With code, environments can be easily rebuilt and recovered.

Security

Consider also “dirty state” as a security threat:

  • leftover notes,
  • temporary allowed connections,
  • unaccounted data copies,
  • test encryption keys,
  • weak passwords,
  • unused accounts, …

Securing the mess.

If the system is compromised, its instances keep all these backdoors unidentified and wide open.

With code, clean systems can be easily rebuilt preemptively.

Backups can hardly serve this purpose because “dirty state” cannot be selectively excluded.

Backups versus Code

Backups are the straightforward approach to recovery.

Dealing with a huge one.

  • Backups are huge and slow.

    Compare a mammoth with its DNA sample.

    You can still easily justify a backup for actual application data - something which is not re-generatable and precious.

    However, system instances should be re-generatable.

    The code stays slim for that purpose - any version is recoverable at any time.

  • Backups are neither comparable nor mergeable.

    You cannot compare or merge two mammoths, but it’s possible to select their best properties in combined DNA.

  • All backups may already be contaminated.

    Imagine that all the hibernated mammoths you store in the freezer were terminally ill before they got there because of their DNA. Without the ability to clone their DNA and fix it, they are lost as a species.

    Code has hardly any limit on how deep into its history you can go to patch it against issues.

Backups actually complement swift automated recovery but are surely unable to provide all the benefits alone.

Besides all that, backups require management - a sub-system of their own. Shouldn’t it also be codified for automation instead?

Hidden Issues => Actionable Reports

If steps are executed manually (without costly pedantic review):

  • Exit codes are not seen.
  • Error messages may remain unnoticed.
  • The sequence of steps and their dependencies are violated.

If valuable feedback is not seen immediately, the chance to eliminate hazards early (away from the release dates) is missed.

The potential problems pile up, and most of them are not even recognized.

  • Did you get a deployment report with all failed steps listed?
  • Can you redeploy right now and review a fresh report?

Visible issues.

Any automation instantly reports a spectrum of issues across the entire integrated stack every time.

Reports are factual, precise, detailed, human-friendly and machine-processible.

Blind Troubleshooting => State Analysis

Troubleshooting requires reliable evidence, ideally in a preserved environment isolated from concurrent actors.

In the extreme worst case, the evidence is simply made up by humans.

Data captured by tools.

The more unreliable the evidence is:

  • the fewer causes can be ruled out quickly,
  • the more human-time is spent on making sure of things (instead of implementing a solution),
  • the more deadlines are missed,
  • the more costly the project becomes.

And this chain may loop on every issue encountered!

Code allows cheap identical isolated environments with pre-configured sophisticated tooling to capture runtime state.

Instead of guessing, you are analyzing!

Hesitant Steps => Resolute Progress

Have you seen teams who avoid changes because they cherish a single working setup?

Systems maintained manually accumulate all sorts of unknown modifications - configuration drift. Nothing works without these changes, but nobody remembers what they are.

Do not let demands for stability freeze future progress.

Literally icebreaker.

System code under revision control is undamageable and tracks all changes between versions with the finest granularity.

Spawn a system instance of any branch and maintain progress.

Conclusion

Let’s list the activities per section of this post:

  • Excessive Labor => Process Control
  • Wasteful Documentation => Concise Introduction
  • Costly speculations => Exact descriptions
  • Basic Integration => Rich Functionality
  • Divergent builds => Identical rebuilds
  • Needless craftsmanship => Useful innovation
  • Unrecognized Rework => Explicit Iterative Development
  • Stored Artifacts => Mergeable Versions
  • Scattered copies => True reuse
  • Onsite Development => Location-independent offline activities
  • Unseen distributed changes => Centralized management
  • Higher Level Automated Testing
  • Reviews and Demos
  • Recovery
  • Security
  • Backups versus Code
  • Hidden Issues => Actionable Reports
  • Blind Troubleshooting => State Analysis
  • Hesitant Steps => Resolute Progress

These are very powerful capabilities. In fact, the entire Agile movement (which is normally empty hype by itself) is derived from these properties.

  • Everything above is enabled only by managing Infrastructure as Code.
  • Everything above is largely disabled by doing it otherwise.

It might be logical to do Everything as Code (EaC). However, there are limitations worth a separate post.


Polymorphism, Inheritance, Encapsulation are mysterious words.

OOP is many years past its buzzword status. Everyone knows the keywords, everyone applies the mechanisms, but far fewer people (including me) actually speak these words. Why?

The names of these three cornerstone OOP concepts seem overdressed to me. They are fancy and trigger possibly unrelated imaginations. Their explanation is frequently rooted in theories and solution modeling. This is all truly beautiful. However, what’s the bare minimum to describe the concepts addressing practical issues?

Spending fewer lines

If Polymorphism, Inheritance, Encapsulation are indeed indispensable, what is their ultimate economic benefit?

To beat the competition in the software business, you need to address two problems:

  • Respond fast to changes in required functionality.
  • Avoid the human writing speed bottleneck1.

A simple approach is to write less2 (compared to a language without OOP support).

Edsger Dijkstra once said:

My point today is that, if we wish to count lines of code, we should not regard them as “lines produced” but as “lines spent”: the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.

Surface of potential changes

Polymorphism

To highlight the purpose: the word “polymorphism” should not bring any associations with (poly)morphing - any morphing.

What polymorphism does at its core is something different - selecting the external code to execute. It delegates, or routes, or dispatches, … operations. It does not morph objects.

We needed support from the language to achieve different behavior (operation) based on different context (object/type). We got polymorphism.

It keeps client code stable no matter which operation gets executed.

abstract class Shape {
    abstract void draw();
}
class Rectangle extends Shape {
    void draw() { /* draw rectangle */ }
}
class Sphere extends Shape {
    void draw() { /* draw sphere */ }
}
void client(Shape s) {
    s.draw(); // select the right code to execute
}

Again, polymorphism:

  • avoids polymorphing the client code
  • does not polymorph objects

Sounds contradictory, doesn’t it? Well, that is because it is misleadingly defined as a property of the type instead of the operation:

A polymorphic type is one whose operations can also be applied to values of some other types.

Hmm… Isn’t it more direct to say that the operation is chosen based on the values? And the term “polymorphic type” does not even make sense in duck typing (e.g. Python) or static polymorphism in C++ (which is also duck typing to me, but a static one) - in these cases, types are not polymorphic.

Any novice in programming will need to google a lot to match it with reality (e.g. to discover the differences between static and dynamic types).

So, we are supposed to write less by only adding new operations without changing existing client code. The new operations get selected based on static or dynamic context. Polymorphism (despite the awful term) actually helps.

Inheritance

Inheritance simply sets defaults.

What else can be added to it? Nothing. I’ll just elaborate.

In fact, none of the languages I work with uses an inherits keyword - it is almost always extends. And the point is that we just set defaults and override them - quite a mundane phrase instead of the mysterious “inheritance”.

We can save many lines of code in our explanation of what Square can do by setting its defaults to those of Rectangle:

class Square extends Rectangle {};

Now, whatever clients could do with Rectangle, they can do with Square. And, by default, it will be the same.

Rectangle r = new Rectangle();
r.turn(90.0);
r.draw();

Square s = new Square();
s.turn(90.0);
s.draw();

Beyond defaults, inheritance is often used to enforce type safety or to set other constraints (like Square is a Rectangle). I count that as a side effect because there could always be different syntactic support for just setting these constraints.

So, we definitely write less if defaults are reused. Inheritance (despite the pompous term) helps too.

Encapsulation

Encapsulism

Encapsulation is prevention from relying on unstable (internal) details. That’s it.

How does it help us to write less?

  • It may not help us to add less code.
  • But it helps us to change less code.

When unstable details change, there is no code which relies on them (and needs to be modified in response).

Encapsulation is probably the only word that suggests its own meaning. We just define the component’s boundary: external and internal parts, or public interface and private implementation.

The approach enforces one of the fundamental design principles - minimize the impact of (typical) future changes. Violations can be caught at compile time, without writing tests - access to private members from outside the class will fail… (I’m sorry for Python here).
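
A minimal Java sketch of that boundary (Account is made up for illustration): clients can only use the public operations, so the private representation can change later without touching any client code.

class Account {
    private long balanceCents; // unstable internal detail - hidden

    public void deposit(long cents) { // stable public operation
        if (cents < 0) throw new IllegalArgumentException("negative deposit");
        balanceCents += cents;
    }

    public long balance() { // stable public operation
        return balanceCents;
    }
}

// Somewhere in client code:
// account.balanceCents = -1; // does not compile - the violation is caught for free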

So, we proactively stop clients from having to change (by prohibiting them from using “uninsured” operations). Should we rename encapsulation to “insurance”? Nah, that triggers even more unrelated imaginations.

Summary

  • Polymorphism: reuses client code (for new operations).
  • Inheritance: reuses defaults.
  • Encapsulation: keeps client code stable.

If the use of Polymorphism, Inheritance, and Encapsulation does not make you change less, you are probably using them wrong.


  1. The primitive way to look at the code-writing limit is measuring typing speed. However, if typing were the problem, there would already be solutions to produce rapid output. The real issue is getting code accepted into production - code should be thought out to meet requirements, fit well into the architecture, be written readably for future changes, debugged, reviewed, tested in integration, etc. Choosing one programming language over another may help to reduce the amount of code per given functionality - this moves you away from reaching the limits. And the question is how applying OOP mechanisms serves this purpose. 

  2. It is important to point out that small code size alone, without readability, is, of course, useless. You need read-write code, not write-only-and-forget garbage. But we are not comparing languages or styles here. Instead, we compare the same choice of language/style with and without applying OOP techniques. 

Occasionally, I still run into development tools which sell people on the idea of visual programming.

There are particularly prevalent cases in the enterprise world which typically sound something along these lines:

This tool allows architects to draw a domain model using diagrams.

The diagrams are translated into source code skeleton for developers automatically.

Multiple languages are supported as the output: C++, Java, SQL, …

If this source code is ever meant to be updated by developers manually, the approach will likely fail

… unless the issues discussed below are thoroughly addressed.

Visual Programming versus Visual Presentation

I want to emphasize the difference between directions of conversion:

  • Visual Programming - the WRONG approach

    • Source: manual visual diagrams

      The visual diagrams are modified via special visualization software.

    • Result: generated source code.

      The text files with code are still normally generated to provide input for a compiler.

  • Visual Presentation - the RIGHT approach

    • Source: manual source code

      The source is the code (hence, the “source code”).

      The source code is modified via text files (using any text editor).

    • Result: generated visual diagrams

      The visualization of the source code is done by various tools.

      For example, IDEs may draw class diagrams or dependency graphs.

These are essentially two different directions on the same two-way road.

And the road should only be one-way!

I warn specifically against the impractical 1st case. Why only the 2nd case?

  • Practical benefits are rooted in the fact that most software is written/read as code (text).

  • In turn, using code is motivated by its immense flexibility and support from a wide range of easy-to-implement tools.

What’s wrong with visual programming?

The mess and inconvenience introduced by visual programming tools defeat the benefit of the seemingly more illustrative diagrams they provide.

It is one-way the wrong way

The first problem is that visual representation goes the wrong way: the diagrams are the “source”, and the code is the “destination”.

Any manual change of generated code will be overwritten with the next change of a diagram.

And this wrong way of update direction is often the only way!

Can you load existing code into the visual programming environment?

It often couples presentational layout and implementation data

This issue alone makes life miserable. The (noisy and frequent) graphical layout changes are indistinguishable from changes in implementation.

This is similar to whitespace changes in text files, except that you may not be able to tell them apart due to a possible binary format of the produced artifact (see next).

It often uses binary formats and integrates poorly with other tools

An entire spectrum of tools is pluggable into the text-processing infrastructure:

  • text editors,
  • revision control systems,
  • text processing tools,
  • static code analyzers,

How does a visual programming tool fit in here?

Are you able to take any tool of your choice to analyze “diagram sources”?

Even in the case of an open text-based format, it won’t be convenient to consume because the produced artifacts were not meant to be looked into directly without rendering them in the visual tool.

It can hardly provide change review

For example, how would you compare the differences between two spreadsheet files? With plain text, the difference between two versions is immediately visible:

--- hello-world.sh      2016-06-02 13:54:13.910585354 +0800
+++ hello-world-file.sh 2016-06-02 13:53:31.174356629 +0800
@@ -1,4 +1,4 @@
#!/bin/sh

-echo Hello, World!
+echo Hello, World! > file.txt

If you cannot clearly see changes between versions, the confidence of making steps forward will be drastically undermined.

Think again: if you don’t know what has changed, you must be concerned.

By the way, how do you quickly send bits and pieces of a solution “written” in a visual programming tool to a remote peer for review? Text is immediately copy-and-paste-able (diagram screenshots are not).

It is not flexible enough

Visual programming tools will always crawl behind the possibilities achievable with combinations of components working on “standard” text.

For example, does it allow regex search of attributes or comments within diagrams (or do you have to wait for the next release)?

Moreover, such tools may incur lots of mental pain with every change in their own GUI layout. A new layout is like a new town - everything has to be rediscovered again. Also, expect the inevitable bugs of such over-responsible software.

It does not scale well with details

Visual programming is not allowed to hide details. In fact, it may even expose more of them.

It needs to hold all required details to generate necessary code! Otherwise, it can only be used to create a skeleton for a prototype - irrelevant for product development.

If it generates database schema based on diagrams, how sure are you about controlling every fine-tuned piece of SQL?

Just imagine visualizing every detail of any project source code. Isn’t visualization supposed to hide unnecessary details and be limited to a particular context?

It lacks mindshare

  • Is the tool’s knowledge wide-spread and generic?

  • Is it commonly adopted?

  • Or is it something the Internet knows nothing about?

Due to the limitations explained above, the answers to these questions are usually negative.

Living examples

All the visual design tools developed for website design in the past are no longer popular.

Modifying HTML/CSS directly and reviewing the result in the web browser is the most common implementation process among designers!

Just compare it to most of the popular software-related sites and blogs. Would you be able to communicate details in answers and comments on StackOverflow as easily as text allows?

Conclusion

This post is a warning for those who choose tools for their next projects at the current state of things.

Visualize software on whiteboards or documents outside of the code.

Keep code the ultimate source. Derive from it, visualize it, not vice versa.

The visual programming approach will only get you started (with a prototype). Instead, choose tools which keep you going.


Right off the bat, conclusion: the ultimate cost of everything is human time.

Does everyone realize how profound it actually is?

The Ultimate Resource

Go through the cost composition of anything valuable…

Every resource/service in existence accumulates the cost of human time spent delivering it.

Costly stones.

Take honey, for example - it is produced by bees alone. If humans are not involved, why does it cost anything? Because (last time I checked) bees do not simply give it up to humans, let alone package and ship the honey to a local store.

Humans will use bees as long as dealing with bees takes less human time to produce acceptable honey than any other method.

Gold is freely available in many locations (ultimately, abundant in space) - just spend an abundant amount of time to extract it.

The price of gold is balanced by supply and demand, but it could be disrupted by supply alone, which is limited only by the cost of human time.

The price of air is completely disrupted by its immediate availability (zero human time to get) at any habitable point on this planet.

Saving human time is the only value a resource/service gives.

Omnipresent Energy

There are attempts to evaluate the cost of everything in energy spent.

Scientifically cute, but it does not make much sense because:

  • Everything is already made of pure energy (E=mc²).

  • The portion of energy we actually control is proportionally negligible.

Space stuff took billions of years to uncontrollably evolve into what it is - a waiting time we cannot afford. Instead, we use the little controllable energy we have to save time by obtaining “pre-cooked” resources like gold or honey.

Again, the cost of the energy we control is only incurred by human time. If we did not spend time to produce, maintain, and monitor the equipment and materials to harness this energy, it would cost nothing.

Time Is Not Money

We emphasize the importance of human time with the phrase “time is money”, but we only value money because of our limited time.

Just in case someone questions that - remember that even an infinitely small profit can be used to accumulate an infinite amount of money given unlimited time.

Moreover, money can buy anything except time because:

  • Time is the only resource which cannot be stored.

    And the only option we have is to store other resources now to save human time to retrieve them later.

  • Time can only be exchanged one way.

    There is no way to buy your own time with other resources - we can only save time by using other resources.

Money is merely a resource with the highest liquidity - exchange through any other resource (gold or honey) would simply require more human time (physical movement, storage maintenance, value decay).

The value of liquidity (money) is essentially the value of human time.

Technology

Technology should simply be defined as methods to save human time.

Shouldn’t technology be the prime investment strategy then?

There is a limit to how low anyone rates their own time in terms of other resources (gold or honey). And this limit constantly climbs higher.

Sooner or later, the group of humans with better technology will win in the trade against another group because (ultimately) only human time is limited.