Sunday, January 20, 2019

Even Better Than The Real Thing

“True thoughts are those alone which do not understand themselves.” ~ Theodor Adorno

Before Deep Blue defeated Garry Kasparov in their six-game match in May 1997, Kasparov had actually won the previous six-game match against Deep Blue in February 1996. My generation of amateur chess enthusiasts all around the world had grown up idolizing Kasparov. His popularity rested not only on his mastery of the game, whose true sophistication and finesse could not have been easily accessible to most outside a small group of experts with a profound grasp of the theories, even though the rest of us could still marvel at the thrilling variations and improvisations he regularly came up with. He was outspoken, witty, and full of exciting and often visionary opinions about things outside chess, including the rapidly changing geopolitical landscape of that time. A creative, articulate genius of extraordinary cognitive abilities -- the very pinnacle of human consciousness.

Before and even after the Deep Blue victory, most people -- scientists and laymen alike -- did not agree that this muscle flexing of brute computing power had any bearing on the human-machine equations pertaining to anything that even barely resembles what we humans cherish as our consciousness. (Kasparov initially muddied the water further by claiming that he saw signs of human intuition and creativity in Deep Blue's moves that could not have been generated by a massively parallel alpha-beta pruning executed on IBM's custom VLSI chips, though he withdrew that accusation in later years.)

Except for reminding us that our neurological wiring can get faulty and that even the best among us can make mistakes in the act of rational thinking, the events of May 1997 didn't say anything more relevant about intelligence, real (human) or artificial (machine). That was the consensus.

The victories of AlphaGo and AlphaGo Zero in 2016 and 2017 against the reigning Go champions were a different kettle of fish. These programs played extraordinarily creative moves in the process of defeating the human champions, moves that had never been seen in the history of this ancient game, moves that defy all traditional wisdom about the game.

The complexity of Go (a 19x19 board compared to the 8x8 board of chess) dictates that a brute-force search tree of the kind used by Deep Blue will not really work. Whereas the upper bound on the total number of possible chess games is around 10^120 -- first established mathematically by the great Claude Shannon, and though the number has since been refined it is not too different -- in the case of Go the upper bound is as high as 10^1023, assuming the game lasts around 400 moves, as sometimes happens in professional games (source: Wikipedia).
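A quick back-of-the-envelope calculation makes the gulf concrete (a sketch only; the per-move branching figures are rough averages, not exact values):

```python
import math

# Rough upper bound on the number of distinct Go games:
# at most 361 choices per move (19x19 intersections), over ~400 moves.
go_bound_log10 = 400 * math.log10(361)

# Shannon's chess estimate: roughly 30 choices per move over ~80 plies,
# giving on the order of 10^120 possible games.
chess_bound_log10 = 80 * math.log10(30)

print(f"Go upper bound:    ~10^{go_bound_log10:.0f}")
print(f"Chess upper bound: ~10^{chess_bound_log10:.0f}")
```

The Go figure lands at ~10^1023, which is where the number quoted above comes from: 361 raised to the power of the game length.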

AlphaGo uses reinforcement learning to build its repertoire of moves, combining heuristic search algorithms with multi-layered neural networks (hence "deep" learning, as the term now establishes itself as a powerful piece of technological jargon in the human lexicon), starting the learning process with a large number of high-quality games and then playing against itself. AlphaGo Zero, the latest member of the family, outwits even AlphaGo, yet it starts its learning from a blank slate: nothing except the rules, no previous games, building its capability by learning from play against itself. Unlike in the case of Deep Blue, we have no problem accepting that this is real intelligence, artificial though it might be, and we almost nonchalantly accept the creative aspects of this intelligence. AlphaGo and AlphaGo Zero make moves that have never existed in the human world, moves that even shouldn't exist as per the human world. They create things that we cherish as signs of superior intelligence, that we marvel at and learn from.
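The tabula-rasa principle -- nothing but the rules, then self-play -- can be illustrated at toy scale. The sketch below is emphatically not AlphaGo Zero's method (which couples Monte Carlo tree search with deep networks); it is a bare tabular Q-learning loop that learns tic-tac-toe purely from playing against itself:

```python
import random
from collections import defaultdict

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)  # (board, move) -> estimated value for the mover

def choose(board, moves, eps):
    if random.random() < eps:                       # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])  # exploit learned values

def self_play_episode(eps=0.2, alpha=0.5):
    board, player, history = " " * 9, "X", []
    while True:
        moves = [i for i in range(9) if board[i] == " "]
        if not moves:
            reward = 0.0                            # draw
            break
        m = choose(board, moves, eps)
        history.append((board, m))
        board = board[:m] + player + board[m + 1:]
        if winner(board):
            reward = 1.0                            # the last mover won
            break
        player = "O" if player == "X" else "X"
    # Propagate the result backwards, flipping sign between the two players.
    for past_board, m in reversed(history):
        Q[(past_board, m)] += alpha * (reward - Q[(past_board, m)])
        reward = -reward

random.seed(0)
for _ in range(5000):
    self_play_episode()

opening = " " * 9
best = max(range(9), key=lambda m: Q[(opening, m)])
print("preferred opening move:", best)
```

The agent starts knowing only the rules and ends with a value table built entirely from its own games -- the same arc, in miniature, that the essay describes.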

So the question shifts further: but does it understand the sophistication of its own intelligence, does it enjoy intellectual pleasure and satisfaction from it?  Does it intend to play and win?

In addition to the immense possibilities of positive or negative influences such intelligence can have on human lives to delight or concern us, the metaphysical question now intrigues us profoundly: what is this thing we are dealing with here?  What ontological characterization should we use?

Is it a being? (If we were to challenge Wittgenstein's spirit with this conundrum, would he have accepted that this is a valid philosophical question? )

Speaking of the mysteries of being, an (unlikely) anecdote goes something like this: Jean-Paul Sartre is busy working through the proofs of his seminal L'Être et le néant (Being and Nothingness) in a cafe. He asks the waitress for "coffee without milk," and the waitress, eager to impress the great philosopher, replies, "We have run out of milk, monsieur -- may I serve your coffee without cream?"

Our ability to comprehend a turn of phrase like that (and then laugh at the joke) -- even outside any context of Sartre or his philosophical commentary -- is the kind of experience that convinces us of the uniqueness of our intelligence. (But we would do well to remember that such convictions have often turned out to be untrustworthy.)

Comprehension, intent and generalization seem to be clear preconditions for an intelligence to also be called -- by us, human beings -- a conscious being. What we have with AlphaGo Zero -- and all such extraordinarily successful, specialized, Artificial narrow Intelligences that now coexist with us in our daily lives -- is evidently not a being, simply because it does not demonstrate a general intelligence like ours.

When we say generalization -- implying us, the beings with multiple capabilities (playing Go, listening to music, cracking jokes), often proudly demonstrated simultaneously, though at a neurological level it is becoming clearer that multi-tasking is one more of our self-serving myths about ourselves -- we mean this multifacetedness complemented by a comprehension that ascribes to an autonomous entity called "I" the intent behind executing those faculties.

The spatial aspect of generalization -- that is, a closely knit physical site of multiple capabilities, like the human "brain" -- is purely a problem of scale. Sometime in the near future there will be a home device that does all the things our current favorite home devices do, but in addition plays Go like AlphaGo Zero, analyzes our health condition from a few measurements and predicts potential illness, translates literary texts, maybe even composes music. It may appear insensitive and careless to suggest that music, even great music, can also be created -- and not merely machine-generated using statistical algorithms -- by AI, but this amateur musician sees no scientific reason to deny that possibility. Projects like DeepJazz more than hint at such a world. At the heart of DeepJazz is the Long Short-Term Memory (LSTM), a form of recurrent neural network that can exhibit temporal dynamism and has found a wide variety of uses, from speech recognition to machine translation.
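The temporal mechanics that make an LSTM suited to sequence generation fit in a few lines. Below is a bare NumPy forward pass of a single LSTM cell -- the weights are random placeholders and the dimensions invented for illustration; real systems like DeepJazz build on trained networks:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step: gates decide what to forget, store and emit."""
    z = W @ np.concatenate([x, h_prev]) + b
    H = h_prev.size
    f = sigmoid(z[0:H])          # forget gate
    i = sigmoid(z[H:2*H])        # input gate
    o = sigmoid(z[2*H:3*H])      # output gate
    g = np.tanh(z[3*H:4*H])      # candidate cell state
    c = f * c_prev + i * g       # long-term memory update
    h = o * np.tanh(c)           # short-term (hidden) state
    return h, c

rng = np.random.default_rng(0)
X_DIM, H_DIM = 8, 16             # e.g. an 8-symbol note vocabulary, 16 hidden units
W = rng.normal(scale=0.1, size=(4 * H_DIM, X_DIM + H_DIM))
b = np.zeros(4 * H_DIM)

h = c = np.zeros(H_DIM)
for _ in range(32):              # run the cell over a 32-step sequence
    x = rng.normal(size=X_DIM)
    h, c = lstm_step(x, h, c, W, b)
print("hidden state shape:", h.shape)
```

The cell state `c` is what carries information across long spans of the sequence -- the "temporal dynamism" the essay refers to.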

So, leaving aside the engineering problems of spatial localization and scaling, problems that will find their inevitable solutions, let us assume an Artificial general Intelligence that performs a wide variety of intelligent and creative actions (always with higher efficiency, and more and more even with greater innovation and creativity). Will we then accept that AGI as a real being with consciousness? Not a human consciousness, of course -- and hence we can put aside the requirement of passing the Turing test -- but a consciousness nonetheless.

The answer most human beings today hold to be true is "no". Such an AGI will still not comprehend what it is doing, and will not do it intentionally -- that, again, is the common consensus.

A music-making, chess-playing, language-translating, haiku-reciting, calculus-problem-solving AGI with a mastery of puns and wordplay that learns from its interactions with the world as it exists is still a case of competence without comprehension. But don't our own cognitive capacities fall in that same category? It is true that we have learnt to study our mental faculties as objects of examination at multiple levels of abstraction -- from neurochemistry to behavioral psychology -- but those are codified human knowledge (i.e., a collection of scientific and/or philosophical observations) of itself from an outside-looking-in perspective. We don't really comprehend how we arrive at a chain of thought at the moment of thinking; it is only after the fact that we look back and try to rationalize it in terms (scientific, psychological, emotional) that we understand. And even that post-event human cogitation about its own origin lacks lucidity, because it assumes an autonomous intent of some immutable "I".

The structural cohesiveness and consistency of animals -- of which we are but one type -- is a result of thermodynamically viable macroscopic processes that convert energy into order. The awareness of an "I" is an outcome of such constantly interacting processes, as is the fact that there exist things like intelligence and awareness at the macroscopic level. Marcus Steinweg, a philosopher of our time, has described our world as "architecture suspended over the abyss of its own contingency." Our consciousness, including its self-awareness, too is "architecture suspended over the abyss of its own contingency."

But what about all the facets of our inner lives that make our own lives worth living -- pleasure, love, compassion, joy, ecstasy -- and their inseparable counterparts in anger, jealousy, hatred, fear? Will our AGI feel the creative pleasure of coming up with a great bit of music, will it love another AGI that compliments it on the quality of the music, and feel seething anger when the AGI critic in the media trashes this wonderful piece as "too mechanical"?

Does it matter? What are pleasure, love, compassion, anger, hatred outside our own narratives of various neurochemical events in the contingent-arising web of processes that we call our consciousness? The outcomes of those events, as they leave behind material changes -- architectural splendors or scorched earth, for example -- are evident proof that those inner states are expressible (in our examples, creative spirit and anger). If an AGI comes up with a wonderful melody on its own, and then tries to sabotage the circuitry of the AGI tasked with reviewing it, will we call it a conscious being?

Monday, August 3, 2015

Microservices and Post-Tonal Architecture

As ever, the technological future evolves and new revolutions emerge in ways that defy our anthropocentric expectations. The landscape of artificial intelligence is rapidly being populated by a swarm of highly specialized autonomous agents, not taken over by those quasi-conscious all-purpose behemoths of cognitive & creative superpowers who would redefine what it means to be human, and who had inspired in us grand imaginings of dread and anticipation in equal measure. Scientific reality, as they say, is stranger than science fiction.

The primary impetus for these purpose-built smart agents is, obviously, efficiency. The more well-defined, self-contained and free from external perturbation the surface of a problem domain is, the faster a heuristics-driven technique like statistical inference will comprehensively construct the semiotic rules of that domain, or the more efficiently a meta-heuristic methodology like Genetic Programming will solve the symbolic regression problem of finding a function that describes the data-set of that domain.
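A deliberately tiny genetic-programming loop makes the symbolic regression point tangible. The data-set below (y = x² + x), the expression grammar and the population sizes are all invented for illustration -- a caricature of real GP systems, but the shape (random expression trees, fitness pressure, mutation) is the genuine article:

```python
import random

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def rand_tree(depth=2):
    """A random expression tree over x, small integer constants, + and *."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.7 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(t, x):
    if t == "x":
        return x
    if isinstance(t, int):
        return t
    op, a, b = t
    return OPS[op](evaluate(a, x), evaluate(b, x))

XS = [-2, -1, 0, 1, 2, 3]
TARGET = [x * x + x for x in XS]          # the data-set to be described

def fitness(t):  # mean squared error against the data-set (lower is better)
    return sum((evaluate(t, x) - y) ** 2 for x, y in zip(XS, TARGET)) / len(XS)

def mutate(t):   # replace the whole tree or a random subtree
    if not isinstance(t, tuple) or random.random() < 0.3:
        return rand_tree()
    op, a, b = t
    return (op, mutate(a), b) if random.random() < 0.5 else (op, a, mutate(b))

random.seed(1)
pop = [rand_tree() for _ in range(60)]
for _ in range(60):                       # generations
    pop.sort(key=fitness)
    if fitness(pop[0]) == 0:
        break
    survivors = pop[:20]                  # truncation selection (elitist)
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = min(pop, key=fitness)
print("best expression:", best, "mse:", fitness(best))
```

On a well-defined, perturbation-free domain like this one, the search converges quickly -- which is exactly the efficiency argument for specialized agents.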

Unsurprisingly, a similar drive towards fine-grained autonomy is enjoying a high degree of adoption in the conceptualization and implementation of the architectural building blocks of large-scale Software (from IT Automation technologies like Application Release Automation or Cloud Management to Social Networking to online Entertainment): Microservices.

The notion of Microservices is a considerable departure from the traditional monumentalist ambitions of Software architecture, which have typically been characterized by a rigorously well-defined blueprint of the entire system, to the precise specifications of which the code is expected to be written. If traditional architecture was a centralized compositional technique, the Microservices construct is driven by a desire for decentralization; its efficacy lies not in its predictive/anticipatory goals but in its malleability.

In other words, Microservices architecture indicates a movement away from the concept of essential authority or control, away from the notion of a central core of signification and truth. Such a movement away from an immutable set of transcendental truths to a fluid network of interdependence characterizes an epistemological leap of maturity in any field of knowledge. 

For instance, Derrida's playful neologism différance brings about such an epistemological leap from the notion of the transcendental truths that a text was always supposed to carry, by demonstrating that the meaning of a text arises out of an ever-evolving network of relationships -- often in opposition to one another -- embedded in the language, and that such meaning is hence never static and always deferred/postponed.

Such a movement is also championed by the revolutionary insights of Quantum Mechanics -- into the theoretical limits of our observability of the microscopic world and the ambiguities of knowledge about the state of a phenomenon outside observation -- the applications of which have been at the heart of the phenomenal progress in Science and Technology in the last century or so. One of the pioneers of twentieth-century Physics, Werner Heisenberg, summarized it wonderfully: "The world thus appears as a complicated tissue of events, in which connections of different kinds alternate or overlap or combine, and thereby determine the texture of the whole."

The revolutionary leap in compositional theory and technique (within the tradition of European classical music) from functional tonality -- up until the late romanticism of Brahms or Mahler -- to the Serialism pioneered by Schoenberg, Webern, Berg and others offers a highly instructive analogy. The twelve-tone Serialism of this "second Viennese school" broke away from the supremacy of tonal centers and, by opening up the combinatorial possibilities of any and all intervals, ushered in a new era of musical creativity. Serialism (or later post-Serialism, or simply post-tonal music) continued to evolve beyond the Schoenbergian dodecaphonic music built on tone-rows, to ordered sets of intervals, tempo, even dynamics. No longer the absolutes of triads and tonic-dominant and fixed time-signatures, but a network of relationships emerging out of personal aesthetics or mathematical formulas, where it is the relativity that gives rise to the wonderful sounds (from stark minimalism to punktuelle Musik, or "musical pointillism") of Stockhausen, or Boulez, or Xenakis.

We can then borrow the term post-tonal to characterize Microservices based architecture. It is no longer a precisely planned directive of point-to-point communication that drives the flow of control and information, but a choreography of interactions through events that yield condition-driven, use case-optimized pathways of participation between the relevant Microservices. 

An architecture is always evaluated by the value it provides: value to the consumers of the hosted Application or the released product (ease of use, performance, security, auto-configuration, self-upgrade), as well as value to the Engineers building the Software (ease of development, continuous delivery, technological flexibility, heterogeneity of tools and technologies). The efficiency and optimization (for Engineers as well as users) facilitated by a Microservices based architecture depends on three things: the clarity of the bounded contexts associated with each Microservice; the topological distribution of those Microservices around a hub through which Events are streamed -- on the functional plane to ensure optimal interactions, and on the network plane to minimize latency; and the potential for mutation embedded within the design of each Microservice, allowing behavioral modification in those two aspects in response to changes in external conditions (event stream rate, newer Microservices joining the Ecosystem, changes in the underlying network topology, etc.).

Let us consider a typical Microservice ecosystem of n+1 Microservices distributed around an Event Streaming hub (say a messaging bus like Kafka or ActiveMQ), where some of the Microservices are directly communicating with each other over REST. 


The bounded context of each Microservice Mi is defined by a set of APIs (REST/Message envelope) Si. Let us also annotate any direct Microservice(Mi)-to-Microservice(Mj) communication as Mij, and the set of publish-subscribe messages between a Microservice (Mi) and the Event streaming pipe as Mi(E).

The fundamental role of a Microservices based architecture would then be to define a series of principles that govern the relationships between these entities. An idealized example of such principles would be
  1.  Si ∩ Sj = Ø or Si ∩ Sj = Si; if the intersection is not the null-set then Si = Sj, and the two are simply scaled-up instances of the same underlying bounded context. 
  2.  Mij ⊆ (Mi(E) ∪ Mj(E)); that is, the point-to-point communications between any two Microservices are used only for the sake of expediency and performance, while the underlying service abilities are still made available through Event streaming APIs. This ensures that a topological reorientation of the Microservices ecosystem would not lead to any loss of functionality. 
  3.  Mij ⊆ (Si ∪ Sj); that is, all direct interactions between two Microservices remain fully within the scope of their collective bounded context and have no dependency on hidden services that are not formally expressed by the APIs of those services. 
While such governing principles are essential and invaluable, their efficacy remains a function of how appropriate the demarcation of bounded contexts is (that is, the composition of each set Si) and how versatile and optimized the communication pathways are -- between the Microservices themselves, and between Microservices and external agents and services (all Mij instances and the set of all Mi(E)) -- under different use cases and environmental conditions.
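Such principles lend themselves to mechanical verification. A sketch (the service names, API sets and dictionaries below are invented for illustration; a real registry would derive them from service metadata):

```python
# Each microservice's bounded context (Si) is a set of API names; event-stream
# exposure (Mi(E)) and direct point-to-point calls (Mij) are modelled likewise.
contexts = {
    "billing":   {"invoice.create", "invoice.query"},
    "inventory": {"stock.reserve", "stock.release"},
    "billing2":  {"invoice.create", "invoice.query"},  # scaled-up twin of billing
}
events = {  # Mi(E): APIs each service also exposes over the event stream
    "billing":   {"invoice.create", "invoice.query"},
    "inventory": {"stock.reserve", "stock.release"},
    "billing2":  {"invoice.create", "invoice.query"},
}
direct = {  # Mij: point-to-point (e.g. REST) calls between service pairs
    ("billing", "inventory"): {"stock.reserve"},
}

def check_principles(contexts, events, direct):
    violations = []
    names = list(contexts)
    # 1. Bounded contexts are disjoint, unless identical (scaled instances).
    for a in names:
        for b in names:
            if a < b:
                overlap = contexts[a] & contexts[b]
                if overlap and contexts[a] != contexts[b]:
                    violations.append(f"P1: {a}/{b} partially overlap")
    for (a, b), calls in direct.items():
        # 2. Every direct call is also reachable via the event stream.
        if not calls <= events[a] | events[b]:
            violations.append(f"P2: {a}->{b} bypasses the event stream")
        # 3. Direct calls stay within the collective bounded context.
        if not calls <= contexts[a] | contexts[b]:
            violations.append(f"P3: {a}->{b} uses hidden services")
    return violations

print(check_principles(contexts, events, direct))  # [] -- all principles hold
```

Running such a check on every topology change is one way the "potential of mutation" can be governed without a central blueprint.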

The combinatorial techniques of composition in post-tonal music offer an inspiration for the art of deciding upon the initial sets of all Si, Mij and Mi(E). The concepts of Prime, Inversion, Retrograde and Retrograde Inversion provide mechanisms to build a more comprehensive ecosystem of solutions starting from a few well-defined autonomous units of Microservice contexts and their interactions.
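The serial machinery itself is compact. Below are the four classical transformations on pitch classes mod 12 (the row is an arbitrary example, not a quoted historical one); for the architecture analogy, the same operations applied to an ordered set of interaction edges would yield structurally related variants of an original design:

```python
def inversion(row):
    """Invert each interval about the first pitch class (mod 12)."""
    return [(2 * row[0] - p) % 12 for p in row]

def retrograde(row):
    """Play the row backwards."""
    return row[::-1]

def retrograde_inversion(row):
    """Inversion, then backwards."""
    return inversion(row)[::-1]

def transpose(row, n):
    """Shift every pitch class up by n semitones (mod 12)."""
    return [(p + n) % 12 for p in row]

prime = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]  # an arbitrary example tone-row
print("P :", prime)
print("I :", inversion(prime))
print("R :", retrograde(prime))
print("RI:", retrograde_inversion(prime))
```

From one prime row, the 12 transpositions of each of the four forms yield the familiar 48-member row family -- a comprehensive ecosystem generated from a single well-defined unit.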

Can such an ecosystem be seeded with an adequate degree of mutability to allow for evolution from the original sets of bounded contexts and their interaction edges? Does a Microservices based architecture facilitate or hinder the emergence of self-evolving solutions? To be continued ...

Tuesday, January 13, 2015

To Be Lucidly Elliptical: A Software Architect's Paradox

Software Engineering (?) is a relatively new discipline that has contributed to the vocabulary its own inspired neologisms, starting with the word Software itself. But to give the art and science of building Software an intellectual shape that we can relate to, to situate it firmly within familiar epistemological boundaries, we borrow terminologies from other disciplines and continue to reemphasize Software's familial connection to the body of human knowledge through such adoptions: Engineering, Architecture, Complexity, Pattern, Design.

Some of these adoptions have been trenchantly effective, to the point of significantly enriching their ordinary usages with nuances and "signification" previously absent -- Complexity, for example.

Software Architecture is not one such example of effective adoption. One might be tempted to say that it is a singularly problematic example of ineffective adoption. In a rather frustratingly (or charmingly, if you are not in the business of calling yourself or others Software Architects and can simply appreciate the irony for what it's worth) ironic twist, a most rigorous and formal body of knowledge, Architecture, is used in a subjective, open-ended manner where the disagreements tend to be strong regarding the very definition of Software Architecture. Which part of the Software development lifecycle is Architecture? The Component Diagrams? And/or the API Specifications? And/or the Object/Data Model? What about the Development Process? Or the Coding Guidelines? All of these? To what degree of specificity and determination? Can we describe the architecture of, say, a large Enterprise Software in words and diagrams and formulas in a manner that would remain a truthful description of the system throughout the course of its evolution? And if so, can we precisely define the criteria for those words/diagrams/formulas that can be applied to describe the architecture of all such Software?

Martin Fowler, in his book "Patterns of Enterprise Application Architecture", demonstrates this problem succinctly: "In the end architecture boils down to the important stuff -- whatever that is."

But is that formlessness, that fluid boundary, that subjectivity in the interpretation of its roles truly a problem? Or rather, is it truly an obstacle? Is the question "Which part of the Software development lifecycle is Architecture?" even a valid question?

Now that we are trespassing into the territory of ideas dominated by Wittgenstein's critique of language and philosophy, his theories of language-games and family resemblance ("Familienähnlichkeit" in the German original) are worth keeping in mind. Referring to the passages 48-83 in "Philosophical Investigations" we see that it is not a single unique, comprehensive set of fully enumerated criteria that characterizes the concept of "game" (or "language") but a pattern of resemblance and resemblance of patterns.

To quote: "we see a complicated network of similarities overlapping and criss-crossing: sometimes overall similarities, sometimes similarities of details".  And immediately afterwards, "I can think of no better expression to characterize these similarities than 'family resemblances'...And I shall say, 'games' form a family."

Can Software Architectures too be best understood as such a family, with no precise set of criteria satisfied by all instances but best conceivable through a network of family resemblance? If we answer yes, then we not only obviate the need to answer a question -- namely, "Which part of the Software development lifecycle is Architecture?" -- that leads us nowhere in terms of either understanding technology or positively affecting the process of implementing great quality Software, but we also free ourselves to improve any given Software Architecture in ways unhindered by arbitrary restrictions and boundaries about what Software Architecture can and cannot be.

Equally importantly, such a conceptualization can lead us to certain insights about the qualities that characterize great Software Architectures. No Software Architecture is great in-itself. It is fallacious to point to a poor Product and claim that the Product is poor -- and a Product is poor if its intended users find it to be so -- despite inheriting a great Software Architecture. Whenever such apparent disconnects between the quality of the "Architecture" and the "Product" are cited, the explanations almost always are either (a) that the nuances and sophistication of the architectural abstractions were not reflected in the implementation, or (b) that the requirements didn't represent the actual use cases. Each of these oft-cited rationalizations underlines the problem of conceiving Software Architecture as an intellectual exercise in itself, with conceptual and temporal boundaries to its domain of influence. Those boundaries are imaginary.

The best Software Architectures are the ones that understand the problems of the target users accurately and generate designs that best facilitate the engineers who are supposed to implement the technology.

The oldest known author in the western canon on the subject of Architecture is the Roman architect Vitruvius, who in "De Architectura" identified three qualities that characterize all good architecture:
  • Firmitas : Robustness 
  • Utilitas: Utility
  • Venustas: Beauty
Whereas Firmitas (High-Availability, Scalability, Fault-tolerance, etc.) and Utilitas (Usability, Supportability, Customization flexibility) map readily to the construct of Software Architecture, what in the world of Software Architecture would be analogous to Venustas -- beauty, delight, charm, elegance?

Wittgenstein again, passage 71 from "Philosophical Investigations" : "But is a blurred concept a concept at all? Is an indistinct photograph a picture of a person at all? Is it even always an advantage to replace an indistinct picture by a sharp one? Isn't the indistinct one often exactly what we need?"

The venustas of Software Architecture can be its malleability, its flexibility, its mutability with time, with technology, with changing use cases and disruptive technological advancements. It is a departure from the tradition of monolithic component/stack designs, where the impulse toward monumentalism leads to over-specified, rigid, lugubrious edifices that for all their deceptive, almost mathematical, elegance often turn out to be brittle and unmanageable. Such Software Architectures lead to Products that require the dreaded "re-architecture" every time a significantly disruptive technology enters the fray, or even when higher orders of performance or scalability are asked for.

In contrast to such monumentalism, an elegant Software Architecture would exhibit the qualities of functionalism: it would allow for the organic growth of components and their interactions across multiple releases by facilitating the invention, innovation and engagement of the Engineering teams.


A tangible quality of such a "horizontal", functionalist Software Architecture would be the economy of its specifications: a judicious usage of ellipsis and omission, and an almost artistic restraint against presuming all the patterns that might become relevant in the future. Not every facet of the component interactions or meta-models needs to be anticipated and solved. Not every topology traversal algorithm or scaling strategy must be determined a priori.

Debussy is reported to have said, "Music is the space between the notes": an awareness built on a lucid understanding of musical form.

The art of Software Architecture can benefit from such lucidity about the value of economy in expression. Not to over-specify or over-design. Not to assume that all usage patterns can be easily extrapolated from the current set of use cases, but to infuse the design with the right degree of suggestion and openness, such that future disruptions can be accommodated.

Such acts of lucid ellipsis are not antagonistic to rigor; rather, they complement it. For to architect a great Software product, it is paramount that we do not architect it too much.
        

Sunday, December 21, 2014

Calvino's Memos: Multiplicity

At the time of his death in 1985 the ceaselessly inventive Italo Calvino was working on a series of lectures -- to be presented at Harvard University -- devoted to certain characteristics specific to literature that most resonated with his sensibilities, characteristics he hoped would continue to inspire and shape the literature of the future. The five completed lecture notes -- titled "Lightness", "Quickness", "Exactitude", "Visibility", "Multiplicity" -- were published posthumously as "Six Memos for the Next Millennium" ("Consistency" being the unwritten one). These accessible yet irreducible pieces retain their freshness today: encyclopaedic knowledge of literature and startlingly original approaches to works iconic and unknown alike, elucidated in that inimitably limpid prose style.

Lightness. Quickness. Exactitude. Visibility. Multiplicity.

Any reasonably successful Software Architecture shares some of these qualities with great literature (especially the Novel, which is deeply architectural in structure).

Lightness: agility, flexibility, extensibility, ease of installation and upgrade. 

Quickness: optimized performance, responsiveness, fault-tolerance, rapid self-healing capabilities.

Exactitude: accuracy of information being presented, precise monitoring and optimized scaling.

Visibility:  high availability, transparency into the current state of the system, repeatability and predictability, user persona driven usability experience.

Multiplicity is a more elusive quality -- in Software Architecture as in literature. Calvino approaches multiplicity more obliquely, letting the works he cites and the passages he quotes speak for themselves. His references are drawn from a widely disparate set whose members are dissimilar in most aspects: Thomas Mann to Alfred Jarry, Borges to Georges Perec. From this dense orchestra of texts, two prominent melodic voices distinguish themselves as easily recognizable: the multiplicity of relationships, in effect and in potential; and the multiplicity of representation and encoded knowledge.

An effective example of the multiplicity of relationships in Software is the problem of optimized hosting in a Cloud environment. We choose the AWS IaaS Cloud for this example, but the pattern applies to other hosting environments as well, public cloud and on-premise cloud alike. Let us say there are N different VPCs -- each potentially with its unique Network topology, a special case being topologies isomorphic to each other -- and M different multi-tier applications to be deployed and managed for optimal performance, their design-time representations originally encoded as CloudFormation Templates leading to runtime Stacks. For the sake of simplicity we assume that these Templates, of an arbitrary category EC2-Only, do not themselves create VPCs, but only provision EC2 instances and manage the corresponding Security Group, ELB entries, etc. M is a dynamically changing number dependent on usage demand, but typically a few orders of magnitude higher than N. N obviously has a strict upper bound: there can only be so many VPCs. The set of potential CloudFormation Templates is not a closed one either -- newer Templates will be written based on use cases for newer types of Services and new Compute-Network-Storage requirements. Hence, no a priori determinations can be made about the affinity of a CloudFormation Stack to a VPC when seeking an optimal deployment strategy. Such a strategy must be willing to redeploy -- i.e., migrate -- a Stack from one VPC to another to optimize the cluster of VPCs holistically.

The actual algorithmic aspects of the solution are well known and well understood. The better instances of Architecture and Design, however, would accommodate a multiplicity in the relationship between the VPC and the Stack, which is often reduced to a mere "Hosting"/"Hosted" characterization: i.e., a VPC is the host for the Stack and hence merely a container.

However, the effective capacity -- IP Address Pools, DNS entries, number of Security Groups and ELB entries, etc. -- of a VPC is also dynamically impacted by the running Stacks it hosts. One might even say that the previous statement is a tautology, given that "hosting" does imply the "host" sharing its finite resources with the "guest". Be that as it may, this inverse relation of impacting the capacity of the VPC in a multidimensional manner can be easily overlooked while modeling the system. Multidimensional because the impact is not only a quantitative factor by which the capacity decreases: the topological affinity of a VPC to a particular CloudFormation Template (i.e., the likelihood of that VPC being a good candidate, network usage and requirements wise, for a Stack spun off from that Template) also gets modified.

An effective architecture would hence encode in the model of the system not only the "VPC hosts Stack" relationship, but also the "Stack performs transformation X on VPC" relationship. Here X would again be a function of the CloudFormation Template from which the Stack is created, but also of a template-ized idealization of the VPC itself.

An example: consider a CloudFormation Template T (of the EC2-Only type mentioned above), a Stack S spun off from it, and hosted on a VPC V. The relationship modalities are:
  1. V hosts S
  2. S performs transformation X on V
  3. X = f( topology(T), idealized(V)) 
 This idealized(V) can itself be captured in a design-time topology that describes V without any S being hosted.
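The two-way relationship can be made concrete in a small model. Everything below -- the classes, the capacity dimensions, the cost figures -- is an invented illustration of the modalities above, not an AWS API:

```python
from dataclasses import dataclass, field

@dataclass
class VPC:
    name: str
    ip_pool: int          # free IP addresses (one capacity dimension)
    sg_slots: int         # remaining Security Group capacity (another)
    hosted: list = field(default_factory=list)

@dataclass
class Template:
    name: str
    ip_cost: int
    sg_cost: int

def host(vpc, template, stack_name):
    """'V hosts S' -- and, inversely, S performs transformation X on V:
    the Stack consumes part of the VPC's effective capacity."""
    if vpc.ip_pool < template.ip_cost or vpc.sg_slots < template.sg_cost:
        raise ValueError(f"{vpc.name} cannot host {stack_name}")
    vpc.ip_pool -= template.ip_cost    # quantitative capacity decrease...
    vpc.sg_slots -= template.sg_cost   # ...across multiple dimensions,
    vpc.hosted.append(stack_name)      # ...changing the VPC's affinity for future Stacks
    return vpc

v = VPC("vpc-a", ip_pool=32, sg_slots=4)
t = Template("web-tier", ip_cost=12, sg_cost=2)
host(v, t, "stack-1")
host(v, t, "stack-2")
print(v.ip_pool, v.sg_slots, v.hosted)  # 8 0 ['stack-1', 'stack-2']
```

Note that after two Stacks the VPC still has IP addresses to spare but no Security Group slots, so a third `host` call fails: the "transformation X" has changed which Templates this VPC remains a good candidate for, which is precisely the affinity shift the prose describes.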

The multiplicity of representation and encoded knowledge finds well-known examples in Literature: Mann's "Magic Mountain" presents deep scientific knowledge of the world, a sure understanding of Eros and the power of mortality, and a lyricism that is ironic and compassionate at the same time. Borges, through his incisive (and effervescently entertaining) meditations on time, the universe, imagination and knowledge, continuously creates and subverts self-contained universes using the formal structures of a detective story.

The public-facing model(s) -- APIs being the preeminent examples -- of a complex system bear the signature of a good architecture when the different modalities of "consumption" are all equally facilitated and represented.

The diagram below demonstrates such a semantic equivalence across API consumption paradigms: REST, pure HTTP, Remote Object API, Relational (RDBMS etc.) management, Command Line Interfaces.