Faith Before Reason
Published on 31 January 2022
During the time between Christmas and New Year, I usually force myself not to work on anything productive and instead indulge in a lot of entertainment in a relatively short period, so as to scratch that itch for a while and focus on other pursuits in the new year. This time I replayed Skyrim, and while most of it was unrelated to the content of this blog, one very specific line of dialogue caught my attention as interesting and worth sharing.
“I only know what she told me. She *had a theory* about soul gems. That the souls inside of them don't just vanish when they're used... they end up in the Soul Cairn.”
Emphasis mine. That particular sentence is an excellent writing device for getting out of having to explain an outlandish claim. A genius character just “has a theory,” which, of course, eventually turns out to be correct. Even in a world in which magic is commonplace, dragons attack travellers on the roads between cities, and the world’s deities can be freely found and talked to, the question stands: how exactly does one come across a theory like that?
We have picked at the faults of rationalism, scientism and blind belief in The Science numerous times on this blog; however, I feel it is in order to look into the nature of the scientific method itself. The discourse has been dominated by the “trust the science” types for what feels like a generation now, forgetting the motto of the finest organisation in history to study natural philosophy, the Royal Society’s “Nullius in verba” (“take nobody’s word for it”), and forgetting that the scientific method is primarily a tool for verification rather than discovery.
While it would be prudent for software engineers to know the scientific method by heart, it is rarely necessary to actually employ it in our day-to-day work. The errors in the software we develop can usually be solved with tools, heuristics, spontaneous creativity, or careful observation. I have yet to see a bug in my career that would require unleashing the full force of the formalised scientific method. It is slow, tedious and laborious, but it is invincible: there is no problem in any discipline of engineering that resists it, and given enough time one gets to the solution in the end.
The scientific method can only work on “what is” rather than “what should be,” so the entire process must avoid normative claims. “Make the program better” is not allowed. One has to be descriptive and rigidly logical, otherwise the entire thing comes crumbling down.
The scientific method, as we know from school, consists of the following parts: stating the problem, proposing hypotheses as to its cause, conducting experiments to test each hypothesis, observing the results of the experiments, and deriving conclusions. Whereas stating the problem can be banal (“the program crashes”), coming up with hypotheses is the first real hurdle of the scientific process. Where do hypotheses come from?
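To make these steps concrete, here is a minimal sketch in Python of the method applied to a single bug. The function under investigation and both hypotheses are hypothetical, invented purely for illustration: the problem is stated descriptively, each hypothesis gets its own controlled experiment, and the conclusion follows only from the observed results.

```python
# Problem, stated descriptively (no "make it better"): parse_price()
# crashes on some user input.

def parse_price(text):
    # The function under investigation; hypothetical example.
    return float(text)

def experiment(description, trial):
    """Run one experiment and record whether the crash is reproduced."""
    try:
        trial()
        return (description, "no crash")
    except ValueError:
        return (description, "crash reproduced")

results = [
    # Hypothesis 1: the crash is caused by surrounding whitespace.
    experiment("whitespace input", lambda: parse_price(" 1.50 ")),
    # Hypothesis 2: the crash is caused by a comma decimal separator.
    experiment("comma separator", lambda: parse_price("1,50")),
]

for description, outcome in results:
    print(f"{description}: {outcome}")

# Conclusion, derived from observation alone: only the comma-separator
# experiment reproduces the crash, so hypothesis 1 is rejected and
# hypothesis 2 survives to be tested further.
```

The point of the ceremony is that each hypothesis is falsified or retained by its own experiment, never by intuition about which one “feels” right.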
This question confuses the materialist because it has no immediately obvious materialist answer. We could wave our hands and dismiss it by saying that “one just comes up with them,” but that is not good enough. It is the most mysterious part of the scientific method, because hypotheses seem to appear from nothing. One might sit with a problem for days, trying to come up with a solution in vain, then stand up, go for a walk in the forest for an hour and, in a flash, a theory might appear in his head that eventually turns out to be true.
What is needed to explain hypotheses is faith. A scientist must believe that the world is driven by a rational system of physical laws that work predictably. That statement is not falsifiable though, and thus there is no way in which it can be accepted as scientifically true. We may always say that everything we have seen “up to this point” was rational and logical, but every second in the infinite universe we could encounter something that breaks the laws of physics and throws the entire theory out of the window. And even that is not true — there are plenty of things that have already happened or continue to happen that defy rational explanation. However, once we believe that the universe is rationally ordered, we can make predictions based upon the things that we see and know.
Europe was at the forefront of advancements in natural philosophy for most of recorded history undoubtedly because this belief in a rational order behind the universe has been in our “intellectual water supply” for centuries. We see the first occurrences of this idea during the transition in Ancient Greece from thinking in terms of symbolism, mythos, and the domestic and municipal religions to philosophy. Five centuries before Christ, the Ephesian philosopher Heraclitus named it logos, a word which is difficult to translate but is generally rendered as “word,” “reason,” or “account.” It was later picked up by the Stoics, Plato, Aristotle, Philo, Plotinus, and a whole host of other philosophers, traditions and schools.
The revolutionary impact of Christianity, which destroyed the primitive ancient order of domestic and municipal polytheistic religions and replaced it with belief in a universal, personal God, the author of the universe, solidified the logos as an intellectual idea wherever it spread. The Gospel of John begins:
Ἐν ἀρχῇ ἦν ὁ λόγος, καὶ ὁ λόγος ἦν πρὸς τὸν θεόν, καὶ θεὸς ἦν ὁ λόγος.
In principio erat Verbum, et Verbum erat apud Deum, et Deus erat Verbum.
In the beginning was the Word, and the Word was with God, and the Word was God.
As Christians, we believe that the Gospel was written under divine inspiration. This does not mean the text appeared out of thin air; it was still written by a person. Saint John was an educated man, undoubtedly acquainted with the Greek philosophy of his time, and he used language that was familiar to him, hence the concept of logos. For lack of a Latin word that would accurately represent λόγος, the Vulgate renders it as “Verbum,” “Word.”
This deficiency of Latin, its lack of words that adequately represent Greek concepts (acknowledged even by pre-Christian Roman philosophers), unfortunately caused a great deal of meaning to be lost in translation, especially for those of us acquainted only with the Bible in national languages. Some translations do a better job than others (French, for example, renders it as le Verbe), but unfamiliarity with the idea of logos leads to shallow or poor interpretations of this verse.
One of the many innovations of Christianity in this respect, though, is the identification of the logos with God, and more specifically with Christ. The Word, after all, was not only in the beginning, and was not only God and with God, but also was made flesh and dwelt among us. This means that, contrary to the view of Christianity imposed upon us by contemporary culture and the cultists of Science, it has always been in the interest of Christians to know and understand the world. One of the objectives of Christianity is to glimpse the nature of God, and since the logos, the rational order behind all creation, is God, to study that creation is to learn more about its creator.
However, from the Bible we also know that man was created in the image and likeness of God, and as such we have the unique ability, found nowhere else in creation, to see the manifestations of logos in the world around us. Because of this, we can find connections in the conceptual space that have a chance of being true: the hypotheses that can later be tested using the scientific method.
There is very little difference between the nature of philosophy and the nature of programming. In both disciplines one deals with layers upon layers of abstractions: groups of points in the conceptual space with some kind of common factor, increasing or decreasing in scope. The only notable distinction between the work of the philosophers of old and that of contemporary software engineers is that the former dealt with the structure of thought theoretically, whereas the latter do it in a more applied way and with a shorter feedback loop.
Indeed, if we look at the process of me typing these words on a keyboard, they first come out as key presses, which are translated into an electrical signal by the keyboard’s circuit board, travel down the cable to the computer, where they are processed as current, then as binary and assembly, pass through a driver in the Linux kernel, are interpreted as text, sent through the window manager to a terminal window with Vim, committed into a Git repository, pushed over SSH across the ocean, deployed, and displayed on your screen as pixels that form letters that form words that form sentences that form paragraphs that form ideas. Each one of these steps is an abstraction that had to be developed. Some have been formalised, others are understood implicitly. Whether that happened in natural language, in formalised logical notation or in C is hardly relevant.
What is relevant is that no part of this structure could ever be developed without following the path of logos to the appropriate extent. If that does not happen, the final product will lack quality. A deficiency of faith in logos, and a search for logos without regard for reality, seem to correspond to the dichotomy of the romantic and classical understandings of the world. One becomes a romantic when the entire system is beheld as one universal whole and then not analysed thoroughly enough. A romantic scientist will perform experiments that do not correspond to the previously established hypotheses in any way, ending up with inconclusive results. A romantic programmer will write messy, unmaintainable code, with various levels of abstraction intermingled.
In contrast, one becomes a classicist when the individual parts of the system are not unified enough to create a structure that corresponds with reality. The vision of the order clouds reality. A classicist scientist will disregard the results of experiments that do not confirm his hypothesis and look for connections where there aren’t any. A classicist programmer might preemptively create abstractions that do not correspond to reality, resulting in code that is rigid and likewise unmaintainable.
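Here is a sketch of what the two programmer failure modes might look like in Python. All names and the toy problem (totalling a list of prices) are hypothetical, chosen only to make the contrast visible.

```python
# Romantic failure mode: abstraction levels intermingled. Arithmetic,
# I/O and formatting all live in one function, analysed as a whole
# but never separated into its parts.
def report_total_romantic(items):
    total = 0
    for name, price in items:
        total += price
        print(f"adding {name}")      # I/O mixed into the arithmetic
    return "TOTAL: $" + str(total)   # formatting mixed into the logic

# Classicist failure mode: premature abstraction. A rigid hierarchy is
# erected before reality has asked for more than one way to aggregate.
class AbstractPriceAggregationStrategy:
    def aggregate(self, items):
        raise NotImplementedError

class SumStrategy(AbstractPriceAggregationStrategy):
    def aggregate(self, items):
        return sum(price for _, price in items)

# The balanced version: one small function whose shape matches the
# actual problem, no more and no less.
def total(items):
    return sum(price for _, price in items)

print(total([("apple", 2), ("pear", 3)]))  # prints 5
```

All three compute the same number; the difference is in how well the structure of the code corresponds to the structure of the problem.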
In light of this, it is easy to understand why the struggle to define “code quality” has been such a futile endeavour. Seemingly everyone in our field tries to define it, inevitably fails, and then brushes the matter off as “subjective.” But quality is neither subjective nor objective; rather, it exists at the boundary of the two. For code to have good quality, it needs to follow the same rational order that animates reality.
Science and faith are not in conflict, but one is actually a consequence of the other. Much like good science precedes good engineering, good faith precedes good science.