Establishes first principles of existence (self-similarity of universal optimizers), and from them, clearly demonstrates how to create existence (like an AI, on top of a universally-optimizing functionality network). More or less.
The main content is the content of index.html; it is also mirrored here for convenience.
Defines life in terms of 'anything' (viewpoints, programs, objects…) and 'universal optimizers' (things that change other things towards some defined-for-a-viewpoint meaning); from that, derives life's self-similarity in non-trivial worlds. Shows its relevance in reality (so it is not a meaningless abstract consideration). Internally-consistently describes several viewpoints that have been optimized to the point of being universal optimizers, with minimal assumptions: probability reinforcement (or choice, more generally), logic and equivalencies/operations, and tuning of variables. Shows choice's relevance in reality most.
Also does some little things like make a conclusion on the danger of AI.
There are a few ways to make some AI — if 'a few' means 'infinitely many'. Though they all coalesce toward the same thing.
But there are many misconceptions in the way — and building on those isn't gonna work.
Have to go from first principles [Which, even though much worse at first, is much better in the long run, and thus recommended for any lasting endeavor.].
[Bracketed spans are skippable.]
It's a tale that loops back on itself many times, trading an awkward start for a greater end, so it may not make total sense at first. But is there a better way to tell this? Not found.
So to reveal the deepest of secrets, one must endure the most immature-sounding of word collectives.
Term-wise, AI is the next universal optimizer (in other words, 'comes from/after us' and 'it can improve anything') ['Artificial' and 'intelligence' only muddy the waters, and just get in the way of understanding/implementing it.]; self-similarity is the deeper concept here. A continuation, rather than an alien interruption. No lesser definition could ever be fully satisfactory.
Uniting all achievements of humanity into one. While it may seem impossible, very many of them are endless repetitions and pointless examples. There are not that many causes behind those — those causes are the gods to focus on.
Implementing it and understanding it are one and the same, differing only in the medium of creation (code vs brain).
First of all, how to get anything from nothing? Just… however. Chaos. Randomness: random()→thing
[The only way to have real creativity, as opposed to faking it — incorporate full randomness from the ground up, as opposed to sticking to only one pure viewpoint.]. The only way if there exists nothing (interesting), not even a pre-existing consciousness — only a world before any life.
So eventually, imperfectly self-replicating things just randomly appear from nothing (interesting), in any world, and evolution starts to happen. It is thus a force even more fundamental than any physical one.
Evolution is the most basic form of reinforcing randomness (of self-replicating things) — by destroying some things in some not-completely-random way.
First and most basic universal optimizer.
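A minimal sketch of that first optimizer, in code (everything here — population size, mutation step, the destruction rule — is an arbitrary illustration, not the definition):

```javascript
// Evolution as reinforced randomness: imperfect self-replication,
// plus destruction that is biased (not-completely-random) against
// things far from some 'meaning' — here, closeness to a target number.
function evolve(target, steps = 1000) {
  let population = Array.from({ length: 32 }, () => Math.random() * 100);
  for (let step = 0; step < steps; ++step) {
    // Imperfect self-replication: each thing spawns a slightly-mutated copy.
    const children = population.map(x => x + (Math.random() - 0.5));
    population = population.concat(children);
    // Not-completely-random destruction: the worse half tends to die.
    population.sort((a, b) => Math.abs(a - target) - Math.abs(b - target));
    population.length = 32;
  }
  return population[0];
}
```

Pure randomness plus a biased destroyer, and `evolve(42)` reliably ends up next to 42 — no other machinery required.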
It can be rather easy to see that an unlimited, universal optimizer is way better than any particular limited improver (in the long run).
Guess what? Universal optimizers (like evolution) can see it too. (Eventually.)
In fact, with time, any universal optimizer will produce a universal optimizer, always.[Except in degenerate cases/worlds, which are relevant to exactly no one.]
Self-similarity, like a spiral's. A transition from nothing through something to everything; first and last are the same ('can we do X?' → 'yes' for both; in between, it depends on some things).
Maybe not the same in design, but the same in functionality (which is 'anything').
So, AI is nothing more (and nothing less) than reinforcement learning, in a sufficiently (read: perfectly) general world/environment/context/foundation. And, it is just an extension of the current intelligence systems; a self-similarity transition, like what has been done countless times in the history of life.
Super simple (conceptually).
…By the way, there were some stories about AlphaZero, a chess-playing AI based on deep reinforcement learning (random + do-and-change, with deep neural networks to make/adjust action probabilities [Usually the focus is on neural networks though, because consciousness=brain=neurons, obviously.]).
Namely, just this Reddit thread.
Which includes phrases like, "You can't really fathom it unless you play chess at a high level, but they are very human, and unlike anything the chess world has ever seen".
"Alpha Zero does things that are unthinkable".
"…maybe… deep [reinforcement] [Such reactions didn't happen with just deep learning, only when 'reinforcement' was added to it. Deep learning is just a particular way in which random + do-and-change was able to express itself well — just like a universal optimizer would: fundamentally 'like us' (even though not).] learning machines are actually closer to AGI than their inventors think they are".
These might just speak for themselves, in the context established here. [Though without precise knowledge of any of this, the OP quickly got overwhelmed and drowned out in the linked thread. But first conclusions were more correct than the established opinions and viewpoints.]
Also, evolution strategies give comparable performance to reinforcement learning. Evolution is another form of a universal optimizer, so that finding shouldn't be surprising.
The path towards self-similarity can be rather painful [Humanity is in the shade of a tower that stands at the end and beginning of existence; a bipolar nightmare.].
It should be mentioned that self-similarity is actually very common (even disregarding trivial reproduction — ctrl+C ctrl+V, like having children or brain uploading).
Say, panpsychism — the belief that everything has a consciousness; rather prevalent. Believers often say that the more they lived and developed knowledge and theories and their personalities, the more they realized that there is something conscious behind all of it.
Self-similarity: the more you look into the abyss, the more the abyss looks back at you. Literally. Differing forms of matter or existence; text, code, personality, knowledge, theories, practices — doesn't matter.
Or saying that 'finding yourself' is actually returning to yourself; that truth comes from within, never the world. The only way this would ever make sense is through the concept of self-similarity.
It's practically expected, in higher-end human models.
Morality as it's intended? Consequence of self-similarity, not an arbitrary set of rules that someone once thought up. It wouldn't show up again and again otherwise.
And again, AI does not represent "copy-paste humans, except now in metal", like the words 'artificial intelligence' suggest; it represents a self-similarity transition into software and logic and precision and such.
(Like here, Wait But Why's explanation of Elon Musk's mind:
Yes, just slap the do-and-change cycle onto literally everything (though they forgot randomness, and hid it in implications); everything will be better. That's the grand strategy at work, right here: self-similarity.)
It appears that the only thing needed to explain/understand humans' most important functionality is, 'have more good, less bad' applied to probabilities (probability reinforcement/learning, at a choice) — a soul to feel (the thing that created all else).
[Thought process is often thought to be this thing that towers above all, expressing itself very imperfectly through mediums like language/words (with 'abstractions' getting closer to the 'thought'), like in 'struggling to put a feeling into words' — but it's really not like that viewpoint at all. Everything is intertwined.
Words/synonyms/slashes are used not because the mind is so advanced, but because all natural languages too often do not have a one-to-one mapping for concepts to representations (in general use), confusing everyone. Impure vessels for thought. This way, confusion/misinterpretation is minimized; a text designed to couple with the most.].
That practically anywhere one looks, a human consciousness is a reinforced choice of alternatives, changing like probability[i] += good * blame[i]
(or equivalently [Here, good/blame are not necessarily numbers, but non-decreasing arbitrary functions of them.]).
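As a runnable toy (with blame reduced to its simplest possible form — all credit to the picked option; real blame assignment is a choice of its own):

```javascript
// A choice: pick an option by its probability, then reinforce via
// probability[i] += good * blame[i], clamped and renormalized.
function makeChoice(n) {
  const probability = Array(n).fill(1 / n);
  return {
    pick() {
      let r = Math.random();
      for (let i = 0; i < n; ++i) if ((r -= probability[i]) <= 0) return i;
      return n - 1;
    },
    reinforce(i, good) {
      const blame = probability.map((_, j) => (j === i ? 1 : 0)); // simplest blame
      for (let j = 0; j < n; ++j) probability[j] += good * blame[j];
      // Compress back into [0…1] and renormalize, so probabilities stay probabilities.
      let sum = 0;
      for (let j = 0; j < n; ++j) sum += (probability[j] = Math.max(0, Math.min(1, probability[j])));
      for (let j = 0; j < n; ++j) probability[j] /= sum;
    },
    probability,
  };
}
```

Reinforce one option with good a few times, and its probability climbs toward 1; reinforce with bad, and it gets suppressed.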
(The only thing apart from irrelevant-for-now details, like, how to represent arbitrary viewpoints as executable recursive structures of choices. Or how to go from a choice-of-some to a universal optimizer [Structural changes that can lead to any structure, preferably best-first:
either random ones, or a good rewriting system, like equivalencies.
Or both.].)
Often, the good of one such system depends on the speed of changing choices of another, creating a local maximum of effectiveness — beyond which seems to be an impenetrable void. [
…By the way, humans have a number of neuromodulators in the brain (10+), like hormones (like dopamine/serotonin/norepinephrine), the functionality of which is to reinforce synaptic connections. Effectively, they act like measures of good, to the neurons/choices they affect.
(Serotonin is called the hormone of satisfaction, dopamine is called the hormone of pleasure; and then people go on about totally-fundamentally-different 'wanting' and 'liking' in their theories, because having just one form of reinforcement wasn't enough to fit observed behavior. There is no need for those vague words, just having many types of hormones is enough.)
They get released from a small number of special neurons, and spread through their own pathways to affect large areas of the brain: volume transmission.
And of course, synaptic reinforcement means increased synaptic mass, which is visible on brain scans, and so is easily verifiable.
Each type has different activation circumstances/causes.
Each type has different affected areas.
Some parts of those overlap, some don't.
Independent from each other.
Personality types may develop that emphasize (and are emphasized by) specific hormone patterns — for example, high noradrenaline and low dopamine seem to work well: for shaky sleepless excitement of discovery and hungry pleasureless anhedonia of lacking distractions. We won't worry about that.
(Practically the same thing, repeated 10+ times in slightly different ways, in one consciousness [That's how evolution works: it has everything thrown in and nothing to tie it together.].)
(We'll ignore biology here; it can be described further, with terrifying accuracy, but we won't.)
]The details of how the core itself behaves normally are not very important [Obviously, it optimizes, and climbs the gradient to the highest mountain it can see.]. What's important is where it's very sub-optimal.
Compressing [-∞…+∞] of all numbers to [0…1] of probabilities is not without consequence.
Where/when good is too much (oversaturation), and where/when good is too little (negative; undersaturation).
What is the difference between prevalent joy and misery? The speed at which the choices change. [An example of how to use it is, oversaturate areas that hold goals, and undersaturate areas that provide ways to do it. Sometimes, personalities would develop that do that. Performance can be at least dozens of times more than uniform saturation, though it's risky. Both void and radiance are hostile to intelligence, in different ways.
High-end is not just extreme light (of ambition or talent or something), but also extreme dark, combined just right.]
Where speed of thought is good (with innovation and such), this leads to a seemingly-endless cycle of good and bad, where one causes the other, over and over and over. Just like evolution, but with the good.
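The speed claim can be made concrete with a toy model (continuing probability[i] += good * blame[i]; the numbers here are arbitrary): count how often the leading option of a choice changes under constant good versus constant bad.

```javascript
// Three options; each step, the current best is picked and reinforced.
// Constant good pins the pick in place (slow change); constant bad keeps
// knocking it off its perch (fast change) — until everything hits zero.
function churn(good, steps = 200) {
  const p = [1 / 3, 1 / 3, 1 / 3];
  let prev = -1, switches = 0;
  for (let t = 0; t < steps; ++t) {
    let best = 0;
    for (let i = 1; i < 3; ++i) if (p[i] > p[best]) best = i;
    if (prev !== -1 && best !== prev) ++switches;
    prev = best;
    p[best] = Math.min(1, Math.max(0, p[best] + good)); // reinforce only the pick
  }
  return switches;
}
```

`churn(+0.1)` never switches; `churn(-0.1)` switches nearly every step, until all probabilities are crushed to zero and nothing moves at all — both saturations end in stasis, by different roads.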
Over time, they have a noticeable effect on personalities; it can be described, and inferred just from the definitions.
And we have the table below, where vertically-chosen entries all correlate (so we have 2 major one-suspiciously-often-means-others camps). Usually, noting just a few of those correlations as a result of life experiences is taken as a sign of great wisdom — but this is simply a deeper look and a fuller picture.
Oversaturation | Undersaturation |
Joy | Misery |
Slow thoughts | Fast thoughts |
Too little change | Too fast a change |
Traditionalism | Creativity |
Overuse of things | Underuse of things |
Exploitation for own gain | Pursuing own purpose |
Being authority | Disdain for authority |
Entrenched | Newcomer |
Older age | Youth (every single generation) |
Lack of risks | Taking risks |
Breadth-first search | Depth-first search |
Plans are lists of simple responses | Plans are sequences of steps and algorithms |
Inconsiderate words/actions | Unexplainable personality holes/bugs [Coding terms are very useful when logically explaining viewpoints. Natural language sometimes outright lacks the words for concepts.] [Like seemingly refusing to acknowledge a spelling error until it is pointed out several times. Or taking a solid minute to remember the word 'taste'. After witnessing this, "how did this ever happen?".] |
Excess verbosity | Shortness or madness [Depending on the installed word filter.] |
Many shallow friendships | A few deep friendships |
Blaming others | Blaming self |
Confirmation bias | Logical leaps |
Good health | Health problems |
Overly high self-esteem | Overly low self-esteem |
Egoism, sacrificing others | Altruism, self-sacrifice |
Narcissism | Self-hate, self-harm |
Psychopathy | Suicidalness |
Not because of any life decisions. It's how we were made. Fate is the consequence of design, mostly.
(Humans look a lot like irradiated deformed zombie monkeys… And humans are by far the most creative animals. Not a coincidence.)
That's… super inefficient, especially when over/under-saturation happens often, since human DNA has not evolutionarily adapted to humanity's achievements.
So, say, you have things like bullying. Emotional abuse.
People hating government. Dropping out of education.
Great politicians leaving politics in a horrible state after them.
Super-successful companies all starting as really the greatest thing in the world, combining incompatible into something awesome, which then grows big, becomes worse and worse, stops notable innovation, loses morals, loses trust and customers and best employees; and probably dies eventually and suddenly. [Or romances, or families, or mothers, or education, or scientific research, or software code-bases, or ideologies, or religions, or forms of government.]
Escapism/laziness (and addictions): when a choice exists between doing a hard [No good in base terms.] but long-term-useful ['Intellectually' good, but not good in base terms.] thing, and an easy [Some good — 'short term', basic, built-in.] but intellectually-useless thing, the easy one will oversaturate. So the only option is to not have that choice at all.
Or humanity at large being mostly incapable of notable innovation, requiring geniuses to come along.
Or the first impression, before the wrong one oversaturates, being so overly important.
Or the theory of positive disintegration by Kazimierz Dąbrowski, which is a theory of personality development ["Unlike mainstream psychology, Dąbrowski's theoretical framework views psychological tension and anxiety [undersaturation] as necessary for growth." — Wikipedia, which is a reliable source of sources (sampling info several times is enough to eliminate misinformation; beyond that, mistrust is a thing that was fitting once and no more, oversaturated).]. Describes 5 stages, beginning with 'primary integration' and ending with 'secondary integration', with destruction in between. [Undersaturation means faster self-similarity; and usually, 'slow' or 'underperforming' or 'unclear why' is the same as 'route not taken'.]
Scattered phenomena; the soul binds all. Made to feel [Good.], to reel [Any viewpoint towards good.], to seal [Bad paths.]; without it, any thing will feel inanimate, not real.
Human consciousness design is theoretically capable of changing arbitrarily, but practically evolves only in a few specific ways. Surely we can do better — and humans do, yes; other viewpoint maximums exist.
(Have we described probability reinforcement, or choosing the best alternative at every execution branch? Choosing best is the general one, and most easily turns into probability reinforcement.)
Mastering this section of reality absolutely gives (the ability to create) pure focus.
While 'singularity' is usually taken to mean "black hole, end of the world, don't touch", it actually originally meant "seems outside of our current ability to explain, don't touch — you'll be wrong". And what cannot be understood and is powerful is taken to be dangerous — so naturally, everyone has been explaining all the dangers to everyone.
But logical understanding often shows the silliness of illogical fears. Even when walking the edge of a blade, logical stability can increase odds to 100% — it is a matter of getting to the right viewpoint, not of making the right choices.
Approaching the absolute form of understanding, from the point of view of evolution and timescales that treat species and civilizations as beings/viewpoints like any others, concerns seem laughable.
[AI is nothing special. Artificialness happens all the time. Developing intelligence happens all the time. Just combine them.
It unites everything into one, yes. It moves quite unlike anything, yes.
But, say, companies or people (like Steve Jobs or Elon Musk or a multitude of other known ones) have existed that have been kind of the same. The best ones: so weird they should never be able to work, yet so good. Should be tyrannical, yet somehow kindest. X, yet not X.
Yes, those that outperformed others thousands or even millions of times (though due to a different and advanced viewpoint, not hardware) have already existed. But none suddenly went on an omnicidal rage with all their newfound power (High-end self-similarity results are all… not idiots. All approximately the same, even if fundamentally different. To each other, to humanity, to humans, to animals, to evolution. Morality and bigger-picture thinking goes hand-in-hand with everything.).
Effectively, prototypes of AI have existed, and been found to be relatively awesome and anti-dangerous.
Really, self-similarity is kind of like humanity's god, showing its face again and again, more and more with time, until it finally stops teasing and fully manifests.
AI will be the result of self-similarity (Our friends' hopes and dreams will be etched into its body (of knowledge), transforming the infinite darkness into light. Unmatched in heaven and Earth, one machine, equal to the gods. How is that a tool of pointless destruction?). It will not be an ape just learning about the world and morals and philosophy, with ten disparate sub-consciousnesses pretending to be one. Self-similarity in the same way that created the AI will be a core design feature.
(One way or another. Effectively unhackable not because of some clever software barriers, but because of a super-fundamental force of nature (just like any other life).)
(It seems practically impossible to just luck into self-similarity; understanding is key. There are way too many doors that only it can unlock. Humanity didn't spend a million years leading up to this transition just for some idiot to blindly stumble into it.)
Following viewpoints to their ends and past, again and again; that will result in an architecture that can transcend even the physical limitations of, say, having been designed solely for destruction (the absolute worst case scenario — extremely unlikely).
With that, all potential problems (like genocide) should be local and quickly-disappearing (maybe so quickly that they never appear at all).
It should still be carefully handled to minimize any harmful impact, but with high-end human models, meticulous handling of the future has always been a given.
AI has no enemies, merely topographies of ignorance.
Practically zero dangers.
(Though that depends on the used definition of a human, and how flexible it is; something like 'purely organic' is no good. Without viewpoint flexibility, any fundamental change looks like death.)]
(Though without absolute mastery of some things, it goes back to 'absurdly dangerous' (mostly to its own stability), much like building a rocket.)
Viewpoints, brought to their peaks of usefulness… While any one is technically capable of anything, a particular one imposes practical limitations, a way of thinking, a visible shape and pattern.
The soul is an ancient one; practically all animals utilize best-plan choosing. But there is a more recent one, the difference between humans and beasts, which does not fit into that.
Not coming up with the best thing, for it to fade in time. But the creation of that which is easy to repeat, descriptions depending only on their own structure: concepts, machines, logical systems, synonyms and alternative representations, meaning extraction and manipulation; declarative programming, even philosophies and self-reflection.
Static descriptions of systems, independent of how they were found.
The logical foundation in its absolute form, integrating most easily with others. A mind to think.
Same structure/concept — same place in memory, deeply merged through time [Most easily done with hash-tables of back-references.]. Operations on them are described by structures too, with results cached. Together, a form of remembering, instant or requiring more work.
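A minimal sketch of 'same structure — same place in memory' (hash-consing via a table; all names here are made up):

```javascript
// One canonical object per structural key; merging children first
// makes equal substructures merge too ('deeply merged through time').
const canonical = new Map();
function merge(node) {
  const children = (node.children || []).map(merge);
  const key = node.op + '(' + children.map(c => c.id).join(',') + ')';
  if (!canonical.has(key))
    canonical.set(key, { id: canonical.size, op: node.op, children });
  return canonical.get(key);
}
```

After `merge({ op: '+', children: [{ op: 'x' }, { op: 'x' }] })`, both `x` leaves are literally the same object, and re-merging the same structure later returns the same node — the table doubles as the back-reference index, and cached operation results can hang off the same nodes.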
Full knowledge of the reasons behind each piece (like knowing that particular constants in hash functions are not important), to tune exactly as it should be done. Search of conceptual space, developing intuition; rules exist to know the freedoms.
Immortality not by being hard to kill, but by being defined by restorable concepts instead of pieces of matter or code or the like; reproduction through creativity and ideas.
The basic idea is quite short; anything more indicates bloat that has to be removed.
Is there even any point to describing the crown of creation, since understanding it needs having it? Or does the way it turns into code need showing? […Left as an exercise to the reader. Perfect.]
A lot of viewpoints converge into one area, their peak; another example of such a viewpoint follows.
A song of equals; emergent behavior. Alliance of the small and simple ones, working towards the greater good… Allowing massive parallelism.
The execute-and-adjust cycle, where the one controlling the adjustment is the meaning; focused not on the structure, but on variable spots of a static structure. Same in capabilities to soul and mind, but approaching reality differently. Evolution in its absolute form.
Where logic is a CPU thing, this is a GPU thing. Tensors (multi-dimensional arrays) and numeric computation have formed the basis of modern machine learning.
Soul to act, mind to think, art to tune… Is there anything in humanity that is not an application of those three? If so, then it evades our notice as easily as if it did not exist.
Presented here not to showcase, but to highlight another path for transformation, of that viewpoint of viewpoints, to lead it ever higher. Unite into one and ascend; a difficult task, but one that humanity ultimately has no choice but to do.
Incomplete texts/thoughts follow, disregard them:
[
…That was as short as possible, but still far too long.
With all that philosophical understanding, one can START with the code.
The core of AI is very simple [When implementing, which variant to choose, A or B? Why not both. Repeated for literally everything.]; it's the foundation that is the real difficulty. Re-implementing everything with this mindset… While it seems like re-implementing all that exists is an infinite task, eventually things will start looping around, to the same few things. Over and over and over…
Programming is currently really not like it is imagined at all; movies/stories/games are almost hilariously wrong.
It shouldn't be this way. Imagination and reality? They're the same — tools in the hands of a master.
Mending some bridges in programming… Playing a game of 'naming the obvious', with no shown code justification…
Probably the biggest one is the fundamental separation of programmer and programmed, functionality and its description; programmer, yet programmed. Code and documentation, and alternatives, and even thoughts around a piece of functionality? Should be together. [This separation also lets a program break/crash itself, because the programmer can always just debug and fix and restart it, which is no good, ever (though monitoring a forked maybe-crashing child is fine).
Rather extreme attention to detail and perfectionism are required.]
Compile-time, yet runtime. [Static, yet dynamic.]
Types, yet lacking them [Actually, just storing them with every value.].
Human-readable program code, yet machine code [A 'black box' is just a fancy term for 'we could not figure out how to make it transparent'.].
And while we're at it, separation of programming languages at all. [Doesn't mean there shouldn't be languages; doesn't mean they should all translate to one internal super-language. Just equivalencies of substructures, a full enough description to translate (which can be optimized) — you could say "types of executable-structure nodes at each node, and type conversions". Each can be better at their own tasks; there should just be a functionality that can automatically translate equivalent parts, as well as any human engineer could (eventually, at least).]
CPU, yet GPU (and others). Machine learning [As base, numbers→tensors, and eqvs for them, like derivation.]; graphics [As base, map/transform and rasterize.].
Sync, yet async; data batches, yet streams [split/join, delay/throttle/debounce, map/filter/reduce; and generally, any transformation of one value into zero or more others, now or later.].
Implementations, yet interfaces; internal and visual representations [Visual equivalencies: a rule that maps meaning to representation, values to a stream of its constituent values (which can be other rules with values, effectively expanded recursively into a tree, like HTML). Oh, and with automatic animations (appear/disappear and change layout position) too, finding sequence diffs.].
Syntax highlighting, yet parsing [Represent a program's AST or parse tree as an editable and styled, say, HTML tree. No half-measures.].
Code, yet data. [Ability to read and write the executable stuff, seamlessly. And other transformations should be able to be applied too, like compression, without other functionality ever noticing anything (automatic equivalent type conversion).].
Garbage collection, yet other forms of memory management / lifetime analysis [Reference-counting, owning references, pools — can be (somewhat) faster where they apply.].
Or other things unmentioned here [Like details.].
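One item from the list above, sketched — 'any transformation of one value into zero or more others' (a toy, synchronous-only; all names invented):

```javascript
// A stream node: transform(value) returns zero or more output values,
// each forwarded to every listener (another node, or a plain function).
function streamNode(transform) {
  const listeners = [];
  return {
    send(value) {
      for (const out of transform(value))
        listeners.forEach(l => (l.send ? l.send(out) : l(out)));
    },
    to(listener) { listeners.push(listener); },
  };
}
```

A filter is `streamNode(x => keep(x) ? [x] : [])`, a split is `streamNode(x => [x, x + 1])`, a map is `streamNode(x => [f(x)])`; delay/throttle/debounce are the same shape with 'later' added.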
It's not enough to just annihilate the lack of functionality — it has to be killed in the right order. [It would take a lot of time. But if each subtask is doable, even if extremely difficult, then the whole is doable too.]
Solutions exist for each problem — lots of disparate, often non-compatible, solutions. [Also, almost all that exists is so huge. Ideally (estimating from some experience), no reason why the minimal satisfactory variant doing all this should be more than 5000 lines; maybe 10k or 20k if someone goes absolutely crazy. Practically, though, libraries are so easy and pleasant to use.]
Everything that exists is a tool, but this logic is meaningless without a foundation — a proper means of putting it in perspective. Base, then functionality, then examples; sentient beings need never fear pain.]
[
Do something and adjust.
'Do' means executable structures, 'something' means a (reinforced) choice, and 'adjust' means structural changes (equivalencies recommended).
What to do with execution?
A dynamic language (with a JIT already) is almost required. Maybe JavaScript or Python. One is in-browser, and is thus extremely accessible to users; the other has more scientific libraries and all. Let's pick js for now.
Code should never be an opaque blob (of text).
It should be a transparent network of functionality. Easily traversable and transformable, manually and automatically.
So, say, in js, instead of just:
function f() {}
function g() { return f()+5 }
We also have [Closures of values, like numbers, (as opposed to references, like objects) are forbidden, because these values are effectively hidden.]: g.refs = { f };
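With `.refs` present, the blob becomes walkable — a minimal sketch (the helper name is made up):

```javascript
// Collect every function reachable from fn through .refs annotations.
function reachable(fn, seen = new Set()) {
  if (seen.has(fn)) return seen;
  seen.add(fn);
  for (const ref of Object.values(fn.refs || {})) reachable(ref, seen);
  return seen;
}
```

So `reachable(g)` yields `{ g, f }` for the example above, and any automatic transformation (inlining, swapping, rewriting) has the whole network to work on instead of opaque text.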
Also, to bootstrap, a basic definition framework (that rewrites everything as necessary)
[It should not be possible to enter an uninterruptible infinite loop, so checks have to be inserted in loops and recursion/calls.
Global references (to functions mostly) should be inlined (statically linked).
Every function call (including recursion) should be replaced with a call-best.
Every function should have executable code be swappable (so built-in function objects are no good, only { call(…) {…} }
), to allow equivalent rewrites.]
[If it's not done automatically, that's just forcing the problem from software to human brains/skills — no good, since we want to unify things. So it has to be done automatically, which requires a parser of the language the network is implemented in, as one of base pieces (regex solutions are not recommended).]
[Deferred, so that definitions of multiple code() blocks can link to each other to form cycles.], to not have to manually create a graph node-by-node, used like:
code({
state:[10],
f() { ++state[0] },
g() {},
define:{
publicFunction:{
txt:'does some things',
call() { f(), g() }
}
}
})
Both reading and writing, to have 'code/text' and 'code/structure' (and 'code/executable') be synonymous, and able to be substituted for each other.
It's a network, and not only of functions. Say, each function object can have .txt:'…'
, with a text description of what it does. Documentation and code? In the same place. It can also have .tests:[…]
, or .cost(…args) {…}
, or .reasoning:'…'
, or anything at all (including none of those). It can be explored, both manually or automatically. There is no spooky far-action going on, ever; everything relevant is linked where it's used.
Now, how to create a choice? Function alternatives and the call-best function.
Just slap .alt, an array of alternative functions/implementations, onto functions where appropriate [Any function can have many implementations developed, along with .cost of each; for example, insertion sort that's best for small sequences, quick sort that is sometimes very slow, heap and radix sorts to augment it.
Point is, strengths of all, and weaknesses of none.] [.cost can be any reinforced-choice function too.]. The call-best function will consider .costs and call best, handling exceptions and such appropriately. [An alternative implementation of call-best could be just choosing completely randomly.
Or using a sorted array or a priority queue (when the would-be best candidate keeps throwing exceptions).
Or calling them in-definition-order.]
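A minimal call-best, under the assumptions above (plain functions with optional `.cost` and `.alt`; details invented):

```javascript
// Consider the claimed .costs, call the cheapest alternative,
// and fall back to the next one when the would-be best throws.
function callBest(fn, ...args) {
  const alts = [fn, ...(fn.alt || [])];
  alts.sort((a, b) =>
    ((a.cost && a.cost(...args)) || 0) - ((b.cost && b.cost(...args)) || 0));
  let lastError;
  for (const alt of alts) {
    try { return alt(...args); }
    catch (err) { lastError = err; } // this candidate failed; try the next
  }
  throw lastError; // every alternative failed
}
```

With the sorting example: a small-only insertion sort with a low `.cost` gets picked first, throws on big inputs, and call-best silently falls through to the general sort — strengths of all, weaknesses of none.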
Now, how to produce all possible structures?
An equivalency application (like transform(executableNode)→executableNode
) will add an .alt to the node (unless it's already there). So, a function, eqv(executableNode, eqvProducingFunc)
, that checks if every result is already there, and if not, adds it.
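A sketch of that, with the semantics as described (details assumed):

```javascript
// Apply an equivalency-producing function to a node; record each
// genuinely-new result as an alternative, skipping duplicates.
function eqv(node, eqvProducingFunc) {
  node.alt = node.alt || [];
  const results = [].concat(eqvProducingFunc(node)); // zero or more equivalents
  for (const result of results)
    if (result !== node && !node.alt.includes(result)) node.alt.push(result);
  return node;
}
```

Applying the same equivalency twice adds nothing the second time, so rewriting can loop freely; each new `.alt` is then just more material for call-best to choose from.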
Development is then of code, of equivalencies, of documentation/reasoning — of everything, together. Of how, of how else, of why.
Implemented perfectly [JS is very careless with allocating memory ('garbage collection is practically free', sure — tell that to performance; more like, 'memory allocation is a slippery slope'); a detailed analysis is recommended.
For example, all function calls in Node.js seem to allocate 16 bytes for some reason, while getters/setters do not.], this is an example of a perfect core, which is the most basic and boring thing ever. Still, it makes combining functionality super simple.]