Sunday, September 08, 2024

Sham Rights Versus Real Rights

If you would like to download a PDF of the following material, please go to: "Sham Rights Versus Real Rights"

1.       God has not granted Zionists the Biblical right to Palestine or surrounding lands. To whatever extent anything has been granted (and this is a contentious issue), the granting was to those (whether Jewish, Christian, Muslim, indigenous peoples, Hindu, or otherwise) who were committed to submitting themselves to God, and this is something which Zionists have never shown themselves capable of doing in any way but a self-serving manner.


2.       Sykes and Picot didn't have the right in 1916 to arbitrarily divide up the Middle East on behalf of the British, French, Italian, and Russian governments.


3.       Arthur James Balfour didn't have the right in November 1917 to promise Palestine -- in part or whole -- to the Zionists via the latter's agent, financier, and protector (Lord Rothschild).


4.       The Nazis didn't have the right to help move and transport Zionists to Palestine in the 1930s -- which was Germany's preferred solution prior to developing subsequent forms of a “solution”.


5.       The British had no rightful mandate from Palestinians -- during the period of 1920 to 1948 -- to rule over Palestine, nor did the British have a right to be so incompetent when it came to preventing Zionists from committing acts of terrorism against, and stealing property from, the Palestinian people.


6.       The Zionists -- in the form of Haganah (1920), Irgun (1931), and/or the Stern Gang (1940) -- had no right to commit terrorist acts in Palestine during the period between 1920 and 1948. The very first hijacking of airplanes and the first terrorist bombings in the Middle East were conducted by members of the foregoing organizations.


7.       Meyer Lansky (a co-founder of the American crime syndicate whose enforcement arm became known as Murder, Inc., the mob-directed assassination bureau) had no right to ship weapons to Palestine. He was merely a Zionist thug helping fellow Zionist thugs through his forte of illicit, illegal, as well as ignoble activities, and the Zionists in Palestine knew the foregoing facts because such realities were the reason why Lansky was never granted citizenship in the Zionist entity created by the U.N.


8.       The U.N. had no right to take land away from Palestinians in order to recognize a terrorist-based, Zionist government as a separate country in 1948.


9.       The Zionists had no right to declare that the geographical area known as Palestine was: “A land without a people for a people without a land” because such a claim was never true.


10.   The U.N. had no right to apportion the majority of land in Palestine (56%) to a minority people (Zionists, most of whom came from outside Palestine, constituted less than a third of the population at the time of partition in 1947).


11.   The Zionists had no right in 1947-1948 to conduct ethnic cleansing in more than 70 Palestinian cities/towns/villages driving out 750,000 Palestinians from their homes, and killing thousands in the process.


12.   AIPAC (which, in 1959, became the renamed successor of the 1954-formed American Zionist Committee for Public Affairs) has no right to operate as a political entity within the United States as long as it fails to abide by the requirements of the Foreign Agents Registration Act.


13.   Lyndon Johnson had no right to betray his country -- as both President and Commander in Chief -- when he chose to protect Zionist interests rather than to protect and to assist the American service people serving on the USS Liberty in 1967.


14.   Lyndon Johnson and James Angleton had no right to ensure that Zionists were able to illicitly secure the resources necessary to construct nuclear weapons.


15.   Zionists have no right to hold the world hostage to a threat of nuclear holocaust if such Zionists are not given what they desire.


16.   People who are acting members of any of the three branches of federal, state, or local government in America have no right to be citizens of both the United States and any other country simultaneously. This constitutes an inherent conflict of interest.


17.   With the possible exception of John Kennedy, all presidents from Truman to Biden have betrayed the rights of Americans by showing preference for the cause of Zionists over the needs of Americans. For example, the two to three billion dollars per year that have been given to support Zionists should have been directed toward re-building American infrastructure, or helping those in America who are sick, homeless, hungry, and/or jobless.


18.   Self-absorbed political narcissists such as: Joe Biden, Kamala Harris, Tim Walz, Donald Trump, J.D. Vance, and Robert F. Kennedy, Jr. have no right to support, aid, or abet the slaughter of hundreds of thousands of innocent Palestinians, most of whom are women and children.


19.   The conflict between Palestinians and Zionists did not start on October 7, 2023. The Zionist side of the conflict started in the late 1800s, when people like Theodor Herzl (founder of the Zionist Organization) encouraged other Zionists to invade, settle, and colonize land which was not theirs. The Palestinian side of the conflict began when the aforementioned invaders started stealing from, terrorizing, imprisoning, and killing Palestinians in the 1930s-1940s, and the Palestinians, as a result, were forced to exercise their right to defend themselves against a foe that was being financially supported, armed, as well as encouraged to become settler-colonialists in Palestine by, among others, America, England, France, and Germany.


20.   Zionism did not come into existence following the Holocaust but arose prior to, and independently of, that set of events. However, never wishing to let a crisis go to waste, opportunistic Zionists have callously used the Holocaust as a propaganda tool to further their own oppressive ends and, in the process, have misdirected attention away from the threat entailed by all forms of pathological tyranny -- including that of Zionism.


21.   Mike Johnson, Speaker of the House, had no right to invite an acknowledged war-criminal as well as a perpetrator of crimes against humanity to address the United States Congress, but he did have a fiduciary responsibility to the people of the United States (which he failed to exercise) to shield the latter from being exposed to anyone who had such a callous and malignant disregard for humanity. Moreover, one is sickened to know that there are hundreds of Representatives and Senators in the U.S. Congress who were willing to stand up 56 times to offer standing ovations, indicating that they are fully on board with terrorism, murder, cruelty, theft, torture, and barbaric oppression so that the Zionist entities that have bribed those same governmental officials will be able to continue to interfere with the American political process.


22.   Zionists have no right to be anti-Semitic ... that is, Zionists have no right to harm, abuse, demonstrate bias and bigotry against, or exercise hatred toward Palestinians who -- unlike many, if not most, Zionists -- are actually a Semitic people.


23.   The leaders of Hamas had, and have, no right to place the lives, children, and property of millions of Palestinians at extreme risk in order to advance a morally questionable military and political strategy.


24.   Zionists -- especially the current Prime Minister -- helped to support, protect, and substantially finance Hamas leadership. Neither side of the foregoing destabilizing collusion had the right to actively deceive and manipulate the people of Gaza as well as people in the surrounding areas who only wanted to live in peace.


25.   England, France, Germany, Zionists, the United Nations, America, and the leaders of Hamas might have all manner of self-righteously proclaimed and dubiously obtained laws, rules, and mechanisms of power through which they conduct themselves as they like. However, those countries, organizations, and groups have never actually possessed anything but a self-delegated sense of delusional entitlement as their rationalization for oppressing and betraying the people of Palestine in the despicable manner which the aforementioned countries, organizations and groups have already done and continue to do.


26.   Miko Peled, Norman Finkelstein, Ilan PappĂ©, Dan Cohen, Max Blumenthal, Aaron MatĂ©, Katie Halper, Lee Camp, Sam Seder, Noam Chomsky, and I.F. Stone -- as well as many other Jewish individuals who could have been mentioned and who have all spoken out against the Zionist project of ethnic cleansing, terrorism, colonialism, and brutal oppression that is taking place in Palestine -- are not the “self-hating Jews” whom Zionists have tried to induce people around the world to derisively dismiss. Rather, the previously identified individuals are people who give expression to the moral courage, critical reflection, and intellectual rigor that are exemplified in the Jewish spiritual tradition. Unfortunately, these same qualities seem to be entirely absent from what appears to be the morally bankrupt and corrupt political-philosophical ideology which is known as Zionism.


27.   Zionists are, and always have been, an occupying force that has become lost in the obsessive, devolutionary, hysterical lawlessness of a colonial-settler way of existence which seeks to spread its pathogenicity everywhere it goes. Consequently, as an occupying power, they have no rights under International Law to defend themselves against those Palestinians who are pursuing their inherent right to be sovereign individuals.


28.   Everyone has a right to sovereignty.


29.   No one has a right to: Ethnic cleansing, torture, terrorism, collective punishment, arbitrary detentions, or behaviors that deprive people of food, shelter, water, education, health care, and the capacity to communicate freely with others about matters that adversely, if not destructively, affect one’s right to sovereignty.


30.   A third of the people being decimated in Gaza and the West Bank are Christians.


31.   Consequently, those Christians who support Zionism are endorsing the idea of an alleged “right” to slaughter and oppress their own Christian brothers and sisters.


32.   Apparently, there are some individuals who believe they have the right to place their spiritual brothers and sisters at risk due to an assumed right to poke or prod Divinity to speed up the end-of-days dynamics in order to realize their own self-serving and delusional understanding of Armageddon at the expense of others. Such a perspective seems inconsistent with the teachings of Jesus/Isa (peace be upon him).


33.   One can only shake one's head in perplexity and dismay concerning those Zionists who claim a right to be outraged and anguished with respect to the Holocaust and, yet, in such hypocritical fashion, those individuals also have become inexplicably entangled in perpetrating terrorist acts against the Semitic people of Gaza and the West Bank ... acts that cannot be distinguished from the moral atrocities which occurred during the Second World War.


34.   The United States has used its veto-powers at the United Nations to protect Zionism for 76 years. One of the many flaws of the United Nations is that this latter organization has enabled the United States to use that veto power to facilitate the destruction of all Palestinian rights by offering all manner of political, financial, economic, legal, scientific, and military support to Zionism, which has been used for nefarious purposes. In the process -- as collateral damage from such irresponsible actions -- the lives of those who are sincere explorers of the Jewish spiritual tradition have been adversely affected because Zionists have tried to obfuscate the difference between Zionism and Judaism.


35.   There is a substantial distinction to be drawn between Zionism and Judaism. The former (that is, Zionism) seeks to induce people to destroy spirituality for the sake of personal, worldly gain, whereas the latter (that is, Judaism) seeks to induce people to enhance spirituality independently of, and, if necessary, at the expense of worldly gain.


36.   The meme: “From the River to the Sea, Palestine will be free” says nothing about annihilating Zionists. Rather, the meme alludes to the right of all people -- including Palestinians -- to be free from tyranny, whether Zionist-caused or caused in some other manner.


37.   Unfortunately, those who object to the foregoing meme have become consumed with a classic case of projection in which such individuals fear that others will do to Zionists what Zionists have done, and are doing, to other human beings. Fortunately, there are many people -- including Palestinians -- who do not suffer from the same defects of thinking, feeling, and acting which characterize those who seek to project their own faults onto others. As a result, notwithstanding the fictional narrative which has been created by Zionists, Palestinians are pursuing nothing more than the acknowledgment and realization of their basic human rights -- aspirations which Zionists have no right to deny or disparage.

-----

The foregoing perspective is not my idiosyncratic view concerning the situation in Palestine. It is shared by many individuals who have had their hearts sincerely opened up (i.e., the quality of Ikhlas) to the spiritual influence of Jesus/Isa (peace be upon him), and such people are referred to as Isawi, or followers of Jesus/Isa (peace be upon him), by the Sufis (i.e., those who pursue the mystical dimension of Islam). In addition, the foregoing thirty-seven points also resonate with the position of those who have had their hearts sincerely opened up to the spiritual influence of Moses/Musa (peace be upon him) and who are referred to, by the Sufis, as Musawi, or followers of Moses/Musa (peace be upon him). For example, in the latter case, many things were said in an interview given by Rabbi Yisroel Dovid Weiss which are consonant with what has been voiced above. The URL for that interview is:

https://www.youtube.com/watch?v=U2H-F0HVKDY

Interestingly enough, the foregoing 37 points were written independently of the Rabbi’s interview; I only came to find out about his talk when a fellow Sufi who had read the 37 points forwarded me the foregoing link and suggested that I listen to the interview.


Tuesday, June 18, 2024

Tractatus Technologicus

 If you would prefer to download a copy of Tractatus Technologicus, please click on the following link: 

Tractatus Technologicus

1.0 - This document gives expression to a data point.

1.1 - That data point has a complex internal structure that might be fractal in nature. In other words, there is -- allegedly -- a pattern which might be present within the point that is being given descriptive expression through this document and that is (in some hard-to-define manner) never-ending in character.

1.2 - However, in order to determine if the foregoing statement is true, the one engaging this point -- namely, you, the reader (another data point with a complex internal structure, possibly fractal in character) -- would have to follow the alleged pattern across all levels of scale to ascertain whether or not there is some principle of self-similarity which ties those scales together in the form of a pattern of one kind or another.

1.3 - I have my doubts whether anyone engaging the current data point would be willing to devote the time and resources necessary to explore the possibly infinite set of scales entailed by the current data point and, as a result, would be able to establish that -- yes, indeed -- the locus of manifestation which is being presented herein is fractal in nature. So, to make things as simple (and, simultaneously, as complex) as can be, the key to identifying the nature of the self-similar pattern manifesting itself across all scales of Being which gives expression to the internal structure of the fractal data point you are engaging is a function of a soul … mine, sort of.

1.4 - The point of departure for generating members of the Mandelbrot set is the iteration Z → Z² + C, where C is a point in the complex plane and Z is initially set to zero; then, wash, rinse, and repeat as many times as necessary to determine whether the iteration process gives expression to bounded conditions or diverges to infinity. The values of C which lead to bounded conditions are members of the Mandelbrot set, and such a set can be translated into a visual pattern by assigning various qualities (such as color) to each member of that set.
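
For readers who want to see the iteration in action, the membership test described above can be sketched in a few lines of Python. The iteration cap and the escape radius of 2 are standard, conventional choices on my part, not anything specified in the text:

```python
# A minimal sketch of Mandelbrot-set membership testing:
# iterate Z -> Z**2 + C from Z = 0 and check whether |Z| stays bounded.

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if C appears to stay bounded under Z -> Z**2 + C."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:  # once |Z| exceeds 2, the orbit must diverge
            return False
    return True

# A few sample points:
print(in_mandelbrot(0))   # the origin never leaves 0: bounded
print(in_mandelbrot(-1))  # -1 cycles between -1 and 0: bounded
print(in_mandelbrot(1))   # 1 -> 2 -> 5 -> ...: diverges
```

Coloring each point C of a grid in the complex plane according to how quickly the orbit escapes is what produces the familiar Mandelbrot images.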

1.5 - The point of departure for generating members of the Whitehouse set is: Soul = P ÷ (E_n × ∑D), where P encompasses potential, E constitutes points on the experiential plane, n is initially set at 0 (some refer to this as birth, or the locus of creation or existence), and D gives expression to the dimensional variables (biological, physical, hermeneutical, epistemological, emotional, social, spiritual, moral, anomalous, temporal) that impinge on and modulate any given point in E and, as such, D generates a hyper-complex manifold that departs substantially from the complex plane entailed by the Mandelbrot set. When the foregoing function is iterated across the existential hyper-manifold, the values which are bounded by, and do not diverge from, the properties of Soul are members of the Whitehouse set.

1.6 - The focus of the complex data point dynamics being given expression through this document is a book by Mustafa Suleyman entitled: The Coming Wave, a complex data point dynamic of another kind.

1.7 - Having gone through the network of data points in the aforementioned book, one of the first thoughts that bubbles to the surface of consciousness to which the Whitehouse manifold gives expression is that the author of the aforementioned book alludes to the presence of elements within a knowledge base that, supposedly, are in his possession, yet seem, at least in certain respects, quite superficial in character – possibly fictional or delusional -- rather than being deeply epistemological in nature.

1.71 - For example, he talks, to varying degrees, about: Viruses, COVID-19, HIV/AIDS, global warming, evolution, medicine, pharmaceuticals, biology, cognition, and vaccines, but the manner in which he discusses those issues in his book suggests he doesn’t necessarily know all that much with respect to those topics. Instead, what he says appears to be based on a process in which the ideas of other people merely have been incorporated into his hermeneutical framework rather than being a function of his own rigorous process of investigation and critical reflection.

1.72 - The foregoing comments are a function of a set of accumulated experiences covering hours of reading, listening, watching, thinking, and writing. Some of the experiential considerations that are being alluded to have been captured in a series of books: [(1) Toxic Knowledge; (2) Follow the What? - An Introduction; (3) Observations Concerning My Encounter with COVID-19(?); (4) Evolution Unredacted; (5) Varieties of Psychological Inquiry – Volumes I and II; (6) Science and Evolution: An Alternative Perspective; (7) Sovereignty and the Constitution; and one 39-page article: (8) Climate Delusion Syndrome].

1.73 - No claim is being made that what is said in the foregoing books is true. Nonetheless, a body of material is being presented in those works which tends to indicate a fundamental familiarity with the aforementioned issues that does not appear to be in evidence within The Coming Wave despite the latter’s employment of terminology which might suggest otherwise.

1.74 - The foregoing considerations present me with a problem. A lot of reputable individuals have praised his book, and, yet, none of them have indicated that there might be a certain degree of disconnection between what the author of The Coming Wave claims to know and what he actually knows, so, what is one to make of such praise sans criticism?

1.75 - Maybe all of the individuals who have offered their praise concerning that book share the same sort of seeming shallowness concerning the aforementioned list of topics. Alternatively, perhaps they all are prepared -- each for his, her, or their own reasons -- to encourage the framing of such issues in ways that are similar to what the author of The Coming Wave has done, and this has become such a ubiquitous, embedded, vested-interest dimension of their conceptual landscape that they no longer pay attention to the many problems which pervade such issues.

1.76 - At one point in The Coming Wave, a shortcoming of earlier renditions of large language models is touched upon. More specifically, such LLMs often contained racist elements.

1.761 - Such racist elements are present in those LLMs because the large collection of human texts that was used to train the LLMs contained racist perspectives. These elements became incorporated into the LLMs -- through ways both obvious and less obvious -- so that when the LLMs were queried by human beings, the responses provided by the LLM (sometimes more blatantly than at other times) gave expression to a racist orientation.
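
The mechanism being described -- a model echoing whatever patterns its training text contains -- can be illustrated with a deliberately tiny sketch in Python. The "corpus" below is an invented toy example, and a bigram counter is vastly simpler than a real LLM, but the dynamic is the same: the model has no views of its own; it reproduces the majority pattern in its data.

```python
# Toy illustration: a statistical language model reproduces whatever
# associations dominate its training text, including biased ones.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def most_likely_next(model: dict, word: str) -> str:
    """Return the continuation the corpus most often provides."""
    return model[word.lower()].most_common(1)[0][0]

# If the training text repeatedly pairs a group with a loaded word,
# the model dutifully echoes that pairing back when queried:
corpus = "group a is lazy . group a is lazy . group a is diligent ."
model = train_bigrams(corpus)
print(most_likely_next(model, "is"))  # -> "lazy": the majority pattern wins
```

Real de-biasing efforts work on the same levers at enormous scale: curating the training data, and adjusting the model after training.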

1.77 - Human beings are like LLMs inasmuch as the algorithms at work in each context are, in part, trained in accordance with the verbal and written language samples to which they are exposed. Perhaps, like LLMs, human beings incorporate elements of linguistic texts that carry biases of one sort or another into their inner dynamics during the course of picking up various dimensions of language.

1.771 - If so, then, the foregoing considerations might account for why there seem to be so many elements of apparent bias concerning the aforementioned list of topics which are present during the course of The Coming Wave. Moreover, perhaps this is the reason why the presence of such apparent biases in that book is not commented on by those who are praising that work: the ones full of praise also have been exposed to, and (knowingly or unknowingly) have incorporated into themselves, such biases while being exposed to various kinds of texts, spoken and written.

1.7712 - Steps have been taken to de-bias LLMs. Although de-biasing is a complicated process, it is easier to accomplish with LLMs because -- to date (perhaps) -- they have not been given the capacity to resist such corrective measures. However, this sort of process is much more difficult to accomplish in human beings because the latter individuals have so many ways of resisting, ignoring, or evading those sorts of attempts.

1.78 - Is Mustafa Suleyman a smart guy? Yes! Is he a talented person? Yes! Is he a successful individual? Yes! Is he a wealthy man? I haven’t seen his bank account or financial portfolio, but I believe the answer is: Yes! Does he have a strong entrepreneurial spirit? Yes -- several times over! Does he understand artificial intelligence? More than most do.

1.79 - Does he understand the nature of the problem that is facing humanity? I am inclined to hedge my bets here and say: Yes and no.

1.791 - One of the reasons for saying “no” to the foregoing question is that despite his outlining ten steps (which will be explored somewhat toward the latter part of the present document) that are intended to free up temporal, institutional, corporate, and intellectual space which might assist human beings to cope, in limited ways, with what is transpiring, I don’t believe his book actually offers much insight into what a real solution would look like or what the actual nature of the problem is.

1.80 - For example, the title of his book -- The Coming Wave -- is problematic. What is allegedly coming has not been coming for quite some time. In fact, that wave has been washing over humanity for many decades.

1.81 - The notion of “emergent technology” is just a technique employed by the establishment (both surface and deep) to try to cover up what already has been taking place for years, and it is a phrase that is often used as a herding technique to push, or pull, the public in one direction or another. Thus, more than sixty years ago, we had someone like Dwight Eisenhower warning about the Military-Industrial Complex -- a complex which he was instrumental in helping to establish.

1.82 – Alternatively, one might consider the thousands, if not millions, of Targeted Individuals who, years ago, were incorporated into AI-controlled torture protocols involving, among other things, autonomous chatter boxes. The so-called Havana Syndrome is just the tip of research and deployment icebergs that have been set adrift by governments and corporations around the world, including the United States (take a look at the work of, among others: Nick Begich, Robert Duncan, and Sabrina Wallace).

1.821 - Advanced AI technology -- for example, Lavender -- already is being used in military and policing projects in Israel. AI also is actively being used in the Pentagon’s Project Maven system, which Palantir has been updating, and one might note that Department of Defense directive 3000.09 concerns the use of autonomous, AI-based weapons systems.

1.8211 - BlackRock has been employing Aladdin for a number of years. Aladdin stands for Asset, Liability, Debt and Derivative Investment Network, and is an AI system that oversees risk management on behalf of its employer. Human traders are a disappearing breed in New York, Chicago, London, and elsewhere.

1.822 - Moreover, Directed Energy Weapons are not limited to the special effects of movie productions. All one has to do is take a look at the evidence from places like Santa Rosa, California or Paradise, California or Lahaina, Hawaii and listen to arboreal forensic expert Robert Brame to understand that such “emergent technology” has already emerged.

1.823 - Synthetic biology is not coming. It is already here and has been walking amongst us, so to speak, for several decades as the work of Clifford Carnicom has demonstrated … work that has been confirmed, and expanded upon, through the scientific investigations of individuals such as Ana Mihalcea, David Nixon, and Mateo Taylor.

1.824 – To create droughts, hurricanes, tornadoes, polar vortices, biblical-like rains, floods, and blizzards all one has to do is combine: Water vapor from cooling towers with the heavy metals present in chemtrails, and, then, apply heterodyned energy-pulsations from Nexrad Doppler weather radar stations. Considerable evidence for the foregoing has been available for more than a decade.

1.825 – Aman Jabbi, Mark Steele, Arthur Firstenberg, and Olle Johansson (there are many others who could be included in this list) -- each in his own way -- have been trying to draw the public’s attention to the many weapons, surveillance, AI systems, or different forms of technology which are, and have been for some time, operational and are being continuously upgraded with human beings as their primary targets.

1.8251 – Yet, neither Mustafa Suleyman nor any of his admirers have mentioned the foregoing data points. Suleyman and his admirers appear to be people who are either: Woefully and cataclysmically ignorant of such matters, or they are quite knowledgeable about those issues and are playing apocalyptically dumb, and, in either case, their pronouncements concerning technology and what to do are highly suspect.  

1.9 - Fairly early in Suleyman’s book, the term “Luddite” is introduced and, then, mentioned several more times over the next 20-30 pages. Each of those references is ensconced in a relatively negative context.

1.91 – For example, initially, the term “Luddite reaction” is referenced. Supposedly, this consists of boycotts, moratoriums, or bans.

1.92 - Mustafa Suleyman goes on to indicate that due to the commercial value and geopolitical importance of technology, the foregoing kinds of activities are unlikely to succeed. After all, corporations and nation-states both tend to soar on the wings of the leveraged power that is provided through technology.

1.921 - One wonders why only the concerns of corporations and nation-states are considered to be of importance. Clearly, what seems to be of value to Suleyman is a function of power (financial, legal, and/or militaristic) which is being wielded by arbitrary hierarchies that cannot necessarily justify their activities and, therefore, often tend to resort to various forms of violence (financial, political, educational, social, physical, medical, legal, religious, economic, and martial) to maintain their existence.

1.93 – Said in another way, what he does not acknowledge is that both corporations and nation-states are, in effect, omni-use technologies. Consequently, one should not be surprised when those sorts of omni-use technologies partner with various more-narrowly focused technologies in order to enhance their respective spheres of influence and power while discounting the concerns being expressed by billions of human beings.

1.94 - What is technology?

1.941 - Technology involves a process of conceiving, developing, and applying conceptual understanding or knowledge in order to realize goals in a manner that can be replicated across a variety of contexts.

1.942 - Another way of describing technology is to speak in terms of tools. More specifically, technology concerns the creation of tools that can be used to provide practical solutions in relation to various kinds of problems.

1.943 - Additionally, technology can be considered to consist of a series or set of proficient techniques and protocols which can be used to address and resolve various problems in a practical way.

1.944 - The terms: “conceptual knowledge,” “tools,” and “techniques” which appear in the foregoing characterizations of technology are all assumed to give expression to one, or another, form of scientific, mathematical, and/or technical proficiency. Furthermore, the notion of “practicality” is usually code for: ‘efficient,’ ‘affordable,’ ‘profitable,’ ‘effective,’ and ‘politically feasible.’

1.945 - One might pause at this point to ponder on why “efficiency” rather than, say, truth, justice, character, or essential human potential is deemed to be a fundamental consideration in pursuing technological issues. Similarly, one might ponder on why the alleged meanings of: “effective”, “profitable”, “affordable”, and “politically feasible” are based on criteria provided by corporations and nation-states which have substantial conflicts of interests in those matters.

1.9456 - Corporations use governments as tools in order to solve many of their problems in a practical manner, just as governments use corporations as tools to solve many of their problems in what is considered a practical manner. The East India Company in England is a perfect example of such a mutually beneficial form of power mongering.

1.9457 - BlackRock, Vanguard, State Street, Google, Amazon, Meta, Apple, the Bill and Melinda Gates Foundation, Elon Musk, the Open Society Foundations of George and Alex Soros, The Clinton Foundation, the private banking system, pharmaceutical companies, any number of media companies, and so on, all benefit from the legacy established through the Supreme Court in cases such as: the 1819 Dartmouth v. Woodward case, or the headnotes of the 1886 Santa Clara County v. Southern Pacific Railroad case, or the 2010 proceedings involving Citizens United v. Federal Election Commission, or the 2014 Burwell v. Hobby Lobby Stores decision.

1.946 - What has been acknowledged by the legal system to be a legal fiction -- namely, that corporations are persons -- is being utilized (by government, the legal system, and corporations) as an oppressive weapon against actual, real, non-fictional human persons. For instance, the 13th Amendment has been used by corporations to, among other things, exploit incarcerated human beings as sources of profit, and the 14th Amendment has been used to protect the invented rights of phantom corporate personhood more than it has been used to protect the Constitutional rights of actual human beings.

1.947 - The American Revolution was fought as much against the East India Company as against the English monarchy. Yet, despite the existence of a general sense among the so-called ‘Founding Fathers’ and the generality of colonists that the notion of a corporation was anathema, here we are today being bullied by institutions that are without Constitutional authority. Unfortunately, those entities have enjoyed the illicit largesse of corporate-friendly jurists -- such as John Marshall -- and, therefore, came to be treated as persons on the basis of a legal fiction and, as a result, have been unshackled from the constraints (permissions, purposes, and temporality) present in the charters which were supposed to govern their limited existence.

1.948 - Using tools in a technically proficient way to solve problems in a practical manner and, thereby, realize goals which are considered to be important is a form of technology. The notion of “legal fiction” was a tool that enabled the technology known as “the rule of law” to carry on in an unconstitutional manner to the detriment of human beings.

1.9481 - Legality and constitutionality are not necessarily synonymous terms. Although constitutionality is the more fundamental concept, legality is what tends to govern society.

1.949 - The technical proficiency referred to above can involve: Law, politics, psychology, business, sociology, philosophy, religion, education, the media, the military, policing, public health, and medicine. Thus, as indicated previously, legal fictions are a tool of law; meaningless elections and conformity-inducing policies are tools of politics; undue influence is a tool of psychology; advertising, marketing, and induced consumption are tools of business; normative behavior is a tool of sociology; arbitrary forms of logic are tools of philosophy; places of worship are tools of religion; teachers and/or textbooks are tools of education; biased, corruptible reporters are tools of the media; threats, lethal force, and oppressive forms of self-serving tactics or strategies are tools of the military; intimidation is a tool of policing; unverifiable theories are tools of public health; and problematic diagnoses as well as synthetic pharmaceuticals with an array of “side-effects” are tools of medicine.

1.9450 - What is considered practical is whatever serves the interest of those in power. Everything else is impractical.

1.9451 - The attempts of human beings to ban, impose moratoriums on, or boycott technology are deemed impractical by the author of The Coming Wave because they do not serve his interests or the interests which he deems to be of value. Thus, the “Luddite reaction” of bans, boycotts, and moratoriums is impractical. The force behind the green screen of Oz has spoken.

1.9452 - Suleyman also refers to Luddites as individuals who “violently rejected” new technology. They were people who were prepared to dismantle technology if peaceful measures failed.

1.9453 - Corporations and governments are entities which are prepared to dismantle people and communities if the latter do not respond to arbitrary oppression in a peaceful manner. This, of course, is an exercise in the “rule of law” rather than violence.

1.9454 - Practicality is established through the rule of law. Whoever rejects such practical, legal measures is, by definition, outside the law and serving impractical ends.

1.9455 – Right or wrong, the Luddites were violent toward technology, not people. However, corporations and governments are – quite apart from considerations of right and wrong – violent toward human beings but not toward technology, because technology serves the purposes of corporations and governments whereas resistant, non-compliant human beings do not serve those purposes and, therefore, need to be dealt with through the “rule of law” – one of the metrics which corporations and governments use to determine the nature of practicality.

1.9456 - According to the author of The Coming Wave, the resistant aspirations of Luddite-like individuals are doomed because whenever demand exists, technology will find a way to serve that demand. When Edmund Cartwright invented the power loom in 1785, the only demand for such a device was that which was entailed by the inventor’s activities as well as that which was present in those few individuals who saw the possibility of a power loom as a tool for making additional profits irrespective of what such a means of making profits might do to people in general.

1.9457 - Technology is not a response to the demands of the generality of people. Technology is an engineering process through which demands are generated concerning entities about which people had no knowledge until the perpetrators of a given form of technology applied various tools involving politics, law, education, finance, economics, and the media to announce its presence.

1.9458 - Technologies shape the landscape out of which demand emerges. Choice is shaped by the presence of those technologies.

1.9459 - An estimated 6000 workers publicly demonstrated in 1807 in relation to the pay cuts which were imposed on them as a result of the power looms that were being installed in various factories. Using the technology of lethality, the guardians of such weaponry killed a protester.

1.94591 - Public demonstrations that caused no deaths are labeled as violent. Yet, a tool that is used to protect the interests of technology is used violently, and this is considered to be but the application of a tool of technology known as the ‘rule of law’.

1.94592 - The Luddites wait another four years before descending into the violent process of writing a letter of protest to a mill owner in Nottingham. The mill owner ignores the letter, and, as a result, property is destroyed but the mill owner is left untouched … presumably in a display of non-violent violence.

1.94593 - Over the next several months, hundreds of loom frames are destroyed by the Ned Ludd-led Luddites. Nonetheless, using – apparently – some form of stealth technology, the mill owners all escape injury or death.

1.95 - In the very last chapter of The Coming Wave – some 240 pages following the pairing of the term: “Luddite” with violence, failure, and impracticality – the author indicates that the Luddites were interested in: (1) being treated with dignity in the workplace; (2) being given a fair day’s wage for a fair day’s work; (3) being afforded some time and consideration by the owners with respect to the challenges encompassed by a changing set of work conditions; and (4) engaging in a discussion about the possibility of entering into some sort of profit-sharing arrangement with the owners.

1.951 - All of the foregoing conditions were ignored and denied by the owners. The owners didn’t care about the workers or their families. They didn’t care if the workers ate or starved. They didn’t care if the workers had a place to live or not. The owners didn’t care if the workers or the families of the workers lived or died. The owners were not interested in sharing anything with anybody who was not an owner, and, very likely, not even then.

1.9511 – Although there have been a few exceptions, owners have rarely appreciated a perspective that was voiced by Abraham Lincoln but that, in fact, has been understood for millennia by millions. More specifically, capital is only brought to fruition through labor, and, as such, labor has priority over capital … in fact, human labor, human skills, human talent, human character, human intelligence, and human commitment constitute the primary form of capital, and the financial form of capital has always sought to obfuscate and, where possible, degrade that truth. It is the story of Cain and Abel played out again and again.

1.9512 – Technology has always been used by those with power to dominate and/or subdue and/or control or diminish the activities of labor. Technology is a dynamic limit which tends toward an upper value of removing most of humanity from the equations of life.

1.952 - To a considerable degree, the years of conflict and tension which ensued from the introduction of the power loom were caused by, or exacerbated by, the intransigent, selfish, self-serving, greedy, overbearing, unyielding, oppressive lack of compassion of the owners toward their workers or toward the workers who had become unemployed as a result of the introduction of a new form of technology. Although the power loom meant that economic difficulties of various kinds would be entering into the lives of the workers, the workers were not necessarily irreconcilably opposed to the introduction of a new technology provided that the workers would be treated with dignity during the transition.

1.953 - The hopes, desires, and needs of the workers, and their families, were trampled upon. Instead of honorable, negotiated accommodations, the workers were met with an array of punitive, oppressive new laws, as well as with technologies of control in the form of policing, militia, and legal tools. As a result, an array of technologies beyond the power loom was imposed on the workers, their families, and their communities.

1.954 - Suleyman peacefully puts all of the foregoing considerations aside and indicates that decades later there were incredible improvements in living standards being enjoyed by the descendants of the foregoing workers. What the author of The Coming Wave seems to fail to consider, however, is that there was absolutely no reason for decades to have been lost before such living standards improved.

1.955 - All of the foregoing results could have been accomplished prior to the time of the original demonstrations in 1807 and shortly after the time of the 1785 invention of the power loom. Unfortunately, owners used the technologies and tools of government, law, policing, banks, the media, religion, and the military to ensure that workers would not be treated with dignity, decency, compassion, or intelligence.

1.956 - This is the sort of “progress” which technology brings. These are the technologies which have been used across all forms of industrial revolution to oppress the people and force them to adapt in the ways in which the overlords of technology desired.

1.9561 - Workers didn’t choose to adapt. They were forced to adapt, and technology generated the tools (in the form of law, education, religion, policing, banks, the media, and so on) through which such “progress” was violently imposed on communities irrespective of the actual, essential needs of human beings.

1.9562 - Throughout the pages of The Coming Wave, the author alludes, again and again, to the idea of seeking solutions to the challenge of technology in a manner such that benefits are more plentiful than any harms which might ensue from human inventiveness. However, nowhere in the aforementioned book does one come across any discussion concerning the metric that is to be used for determining the criteria by which the benefits and harms of a given instance of technology are to be evaluated.

1.95621 - On occasion, the author of The Coming Wave seems to believe that as long as benefits outweigh the harms, then, perhaps, this is the most for which we can hope. Aside from questioning the propriety of reducing the rest of humanity’s hopes to the hopes of the author, one might also question the way in which, apparently, the metric for evaluating our situation should be some form of utilitarian argument that begins at no justifiable beginning and works toward no defensible end.

1.957 - There are two broad approaches to the issue of utilitarianism. One is quantitative and the other is qualitative.

1.9571 - Irrespective of which branch of utilitarianism one chooses to pursue, the process is entirely arbitrary. This is because there is no absolute, undeniable, all-are-agreed-upon starting point through which a person can justify one set of utilitarian criteria over some other set of utilitarian criteria. Consequently, regardless of how one proceeds, the choices are arbitrary especially when such choices are imposed on other people without the informed consent of the latter.
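The arbitrariness at issue can be made concrete with a small, entirely hypothetical calculation: the same two options, scored on the same criteria, are ranked oppositely depending on whose weights are chosen at the outset. The options, criteria, scores, and weights below are all invented for illustration.

```python
# Two hypothetical technologies, scored on three invented criteria.
options = {
    "technology_A": {"productivity": 9, "autonomy": 2, "health": 3},
    "technology_B": {"productivity": 4, "autonomy": 8, "health": 7},
}

def utility(scores, weights):
    """Weighted sum: the 'metric' is fixed by whoever picks the weights."""
    return sum(weights[k] * v for k, v in scores.items())

# An owner's weighting and a worker's weighting (both assumed values)
# rank the very same options in opposite orders.
owner_weights  = {"productivity": 1.0, "autonomy": 0.1, "health": 0.1}
worker_weights = {"productivity": 0.1, "autonomy": 1.0, "health": 1.0}

best_for_owner  = max(options, key=lambda o: utility(options[o], owner_weights))
best_for_worker = max(options, key=lambda o: utility(options[o], worker_weights))

print(best_for_owner)   # → technology_A
print(best_for_worker)  # → technology_B
```

Nothing in the arithmetic adjudicates between the two weightings; the "benefit" calculus simply reports back whatever starting point was imposed.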

1.9572 - Imposing solutions on people without informed consent tends to be the default position for most forms of governance. This is considered to be an exercise in the technology of practicality because oppression seems to be a less complicated way of doing things than an alternative which requires one to engage human beings in all of their nuanced complexities and to provide those people with veto power over alleged solutions that are devoid of informed consent.

1.958 - Does having: Food to eat, a place to live, appliances to use, medical care when needed, educational opportunities through which to learn, a system for participating in government, as well as a career path to pursue, constitute a set of benefits? Wouldn’t the answer to such a question depend on: The quality of the food at one’s disposal; the quality of one’s living conditions; the quality of the community in which one lives; the nature of the hazards or harms which might be associated with the appliances one uses; the effectiveness and risks entailed by the available medical treatment; the quality of the purposes, practices, and conditions to which a given form of education gives expression; the extent and ways in which one is enabled to participate in governance, as well as the degree of meaningfulness, satisfaction, and value which might be present in a given career or job?

1.9581 - When the food which is available for eating is nutritionally questionable if not poisonous, and the places in which we live are replete with toxic influences, and medical care is the leading cause of death, and education is about inducing one to exchange one’s essential nature for empty theories, and government constitutes a set of controlling, abusive, corrupting technologies, and careers often give expression to the logistics of selling one’s soul, then, where is the progress? A series of exercises in the dynamics of willful blindness are necessary to ignore, or merely comply with, the systemic rot which has grabbed hold of many facets of alleged civilization over the last twenty centuries, or more.

1.95811 - How does one parse benefits? How does one parse harms? How does one weigh the former against the latter?

1.9582 - Does technology automatically render such questions easier to answer? Or, does technology constitute an obfuscating series of proprietary complexities in which society has become entangled, much like flies become prisoners of webs that, initially, seemed to be so opportunistically inviting?

1.9583 - Once upon a time, people knew how to grow food, can and preserve edibles, sew, fashion their own tools, build a house, make their own clothes, construct furniture, and survive in the wild. As is true in all manner of activities, some individuals were better at such things than others were, and, to be sure, there were difficulties, problems, and limits surrounding the development and execution of those sorts of skills, but, for the most part, one of the prominent characteristics of many so-called technically-oriented societies is that technology has dumbed down most people in locations where such technology has taken hold as far as the foregoing list of skills is concerned.

1.95831 - We are one Carrington event (natural or artificial) away from creating conditions in which very few people will be able to survive. This is because we have enabled technology to seduce us into abandoning what is essential to being human and, in the process, adopting what is artificial, synthetic, and debilitating to human potential.

1.9584 - The situation of many of us today is akin to that of the Eloi in H.G. Wells’s 1895 novel: The Time Machine. One does not have to characterize technology as the product of some sort of evil spawn of Morlocks in order to appreciate that technology has induced most people to become dependent on technology rather than reliant on what God has given them in the form of their own gifts and capabilities.

1.95841 - Development and maturation used to mean learning how to unpack what is present within one. Now, development and maturation are a function of learning how to transition from one kind of technology to another form of technology.

1.95842 - Perhaps, just as physical skills have been lost to technology, so too, cognitive skills are becoming lost to artificial intelligence. The maxim “use it or lose it” does not necessarily apply just to the physical realm.

1.96 - Nowhere in The Coming Wave does the author explore what it means to be a human being. What are we? What is our potential? What are our obligations, if any, to the life we inhabit or to the life which inhabits us?

1.961 - The author of The Coming Wave cannot account for the origins of consciousness, logic, reason, intelligence, insight, creativity, talent, wisdom, language, or the biofield. He alludes to some evolutionary dynamic as being the source of such capabilities, but all he ever does when using the e-word is to assume his conclusions without ever providing a detailed account of how any of the foregoing capabilities arose or came to possess the degrees of freedom, as well as constraints, which might be present in human potential.

1.97 - All intelligence in AI is derivative. In other words, whatever intelligence is present in AI comes from what is placed in those dynamics by human beings.

1.971 - When Garry Kasparov competed a second time during a chess challenge against IBM’s Deep Blue, he became upset when a move made by the machine seemed to have unexpected human qualities, and, as a result, he began to suspect that he might be playing against one or more humans rather than a machine. What he did not seem to understand was that he had been playing (both the first time around, when he won, and the second time, when he lost) against one or more humans, because the capabilities that had been bestowed on the machine he was playing came from human beings who had equipped the machine with all manner of computational systems for analyzing, evaluating, and applying heuristics of one kind or another to the game of chess.

1.9712 - There was a ghost – or a number of them -- in the machine, and, therefore, Kasparov shouldn’t have been surprised if a human-like quality surfaced at various points during the course of play. What did he think the machine was contributing to the competition entirely on its own?

1.972 - The combinatorics, computational properties, algorithms, transformational possibilities, equations, operators, as well as the capacities to integrate, differentiate, learn, parse, map, model, and develop that are present in AI systems are all a function of human intelligence. An AI system might be given the capacity to generate a variety of attractor basins or networks and invest those structures or networks with different properties, or an AI system might be given the potential to re-order the foregoing capabilities in different sequences with different kinds of interactional dimensions, but those modulating combinatorics, or the potential for such capacities, have come from the intelligence of one or more human beings.

1.973 - Can such systems come up with new ways of engaging issues or generate novel re-workings of various scenarios? Sure they can, but whatever newness emerges is only possible because of what human intelligence has given such systems the capacity to do in relation to the generation of novelty.

1.9731 - Is it possible that the human beings who are constructing such dynamic capabilities are not aware of the possibilities which inadvertently or unintentionally have been built into those systems? Yes, it is, and, indeed, increasingly, technology has become like a black-box chaotic attractor – or set of such attractors – that possesses determinate dynamics even as those dynamics lead to unpredictable outcomes.
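The point that determinate dynamics can still yield unpredictable outcomes can be illustrated with a textbook example, the logistic map: every step follows a rule its author fully specified, yet a perturbation in the seventh decimal place of the starting value eventually produces a trajectory bearing no resemblance to the original. The parameter values below are merely conventional choices for the chaotic regime.

```python
def logistic_trajectory(x0, r=3.99, steps=50):
    """Iterate x -> r * x * (1 - x); every step is fully deterministic."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.5)
b = logistic_trajectory(0.5000001)  # a perturbation in the seventh decimal place

# The rule is the same and the starting points are nearly identical, yet
# the two trajectories end up diverging by orders of magnitude more than
# the initial difference of 1e-7.
divergence = max(abs(x - y) for x, y in zip(a, b))
print(divergence)
```

The creator of such a rule knows exactly what the rule is and still cannot say, without running it, where a given trajectory will land; that is the sense in which a system can be determinate yet opaque even to its author.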

1.9732 - As Mustafa Suleyman notes in his book, a mystifying, if not worrying, dimension of certain kinds of, for example, AI technology is that its creators don’t necessarily understand why a system or network exhibited one kind of decision rather than another. In other words, the creators don’t understand the possibilities which they have instantiated into a given machine, network, or system.

1.97321 - For example, Suleyman talks about a Go move by AlphaGo which has become famous within AI and Go circles and is referred to as “move number 37.” The move took place in a game against Lee Sedol (a Go version – in several ways – of Garry Kasparov); on the surface, it appeared to be a losing move and seemed to make no strategic or tactical sense, but it turned out to be a tipping point in the game, and, yet, no one (including the expert commentators) could understand why the move was made or why it was made at the time it took place.

1.97322 - A machine or system – including AlphaGo -- is not doing something new on its own. Rather, dimensions of the capabilities which have been invested in the machine or system and about which the creators were unaware are becoming manifest.

1.97323 - This is not emergent behavior. This is a failure of the creators to properly vet their creation and thoroughly understand the possibilities and flaws which are present in what they have done.

1.97324 - In other words, the system, network, or machine had been created with certain vulnerabilities. In addition, the creators also enabled the machine, network, or system to exploit or engage such vulnerabilities, and, not surprisingly, this has the capacity to lead to unforeseen results.

1.974 - In response to such considerations, cautionary tales have been written -- to which technologists and many scientists rarely pay much sincere or critically reflective attention -- such as (to name but a few): Faust – Parts 1 and 2 by Johann von Goethe (1773 – 1831); Frankenstein by Mary Shelley (1818); The Time Machine by H. G. Wells (1895); Brave New World by Aldous Huxley (1932); 1984 by George Orwell (1949); The Foundation Trilogy by Isaac Asimov (1942-1953); The Technological Society by Jacques Ellul (1954); Colossus by Dennis Feltham Jones (1966); 2001: A Space Odyssey by Arthur C. Clarke (1968); Do Androids Dream of Electric Sheep? by Philip K. Dick (1968); The Terminal Man (1972) or Jurassic Park (1990) by Michael Crichton; The Terminator by James Cameron and Gale Anne Hurd (1984); as well as Prometheus by Jon Spaihts and Damon Lindelof (2009-2011).

1.9741 - There have been over two hundred years’ worth of cautionary tales concerning such matters. However, notwithstanding the many amazing accomplishments of technologists, engineers, and scientists, such individuals sometimes seem to believe that they are smarter and wiser than they actually are.

1.975 – Mustafa Suleyman has written a book which, for several hundred pages, explores the problems which he believes surround and permeate the issue of containing technology as if, somehow, that topic were some sort of recently surfacing emergent phenomenon … something that -- based on initial, apparently quite superficial considerations -- one couldn’t possibly suspect might harbor the difficulties that, subsequently, have become manifest. Yet, for quite some time, human beings have been aware of the problems that technology: Has created, is creating, and will continue to create. Since that understanding tends to be something of an inconvenient truth, however, technologists, scientists, and engineers just continue to do what they have always done – focus on solving whatever technical problems interest them while, for the most part, ignoring the possible implications of those very activities.

2.0 - Let us assume that we have a machine that can pass a Turing test -- that is, one which is capable of displaying qualities that a human observer could not detect as being the product of machine dynamics rather than human cognition. Does this demonstrate that the machine is intelligent or does it demonstrate that the human beings who built the machine are sufficiently intelligent and talented to create a system which has been provided with an ample set of protocols, logic gates, algorithms, data-processing capabilities, computational facilities, sensing devices, and the like to be able to establish a form of modeling or simulation or set of neural networks that is capable of learning new things and altering its modeling or simulation or neural network activity to reflect that learning and, thereby, do what its creator or creators want it to be able to do?
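As a sketch of how scripted protocols can produce apparently conversational behavior of the kind a Turing test probes, here is a minimal ELIZA-style exchange. The patterns and templates below are invented for illustration; every "response" is simply the execution of rules that a human wrote in advance.

```python
import re

# Hypothetical pattern/response rules, tried in order; the last rule is a
# catch-all. All of the apparent "understanding" lives in these rules.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Please tell me more."),
]

def respond(utterance):
    """Normalize the input, then apply the first matching human-written rule."""
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please tell me more."

print(respond("I feel anxious"))  # → Why do you feel anxious?
print(respond("I am tired."))     # → How long have you been tired?
```

A human observer might find such replies engagingly attentive, but the machine contributes nothing beyond executing the protocols it was given, which is precisely the question the paragraph above raises about any Turing-test passer.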

2.1 - What is intelligence? Is exhibiting behavior that is intelligent necessarily the same thing as being intelligent?

2.12 - Is intelligence the same thing as sentience? Is a machine that can pass a Turing test necessarily sentient?

2.13 - B.F. Skinner showed that one could train pigeons and other animals to exhibit intricate sequences of behavior and accomplish tasks of one kind or another. Those subjects had sufficient capacities for learning to enable them – when properly reinforced -- to be trained or to undergo processes of behavior modification that exhibited considerable nuanced complexity.

2.14 - Was such modulated behavior intelligent or was it the training process which shaped that behavior which actually demonstrated the presence of intelligence? A pigeon comes equipped with a capacity to learn, but a machine has to be given its capacity to learn by human beings who have instantiated certain qualities into the machine that enable learning of different kinds to take place.

2.141 - A pigeon learns according to its capacity for being reinforced in one way rather than another. Given the physiological and biological properties or characteristics of the entity that is being subjected to a form of behavior modification, once something (say, food or an electrical stimulation of some kind) becomes accepted or acknowledged as a source of inducement, it is the pattern of induced reinforcement which shapes learning rather than some indigenous form of intelligence.

2.1412 - The pigeon does not produce that pattern, but, rather, responds to its presence, and it is this responsiveness which is being used as leverage to alter behavior. This is frequency-following behavior because the behavior follows (is shaped by) the frequency characteristics of the reinforcement process.
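A minimal simulation can make the point concrete: the "pigeon" below holds nothing but response propensities, and the reinforcement schedule is supplied entirely from outside by the trainer. Whatever behavior ends up dominating is the trainer's pattern, not the agent's. The action names, reward values, and trial counts are all hypothetical.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

actions = ["peck_left", "peck_right"]
propensity = {a: 1.0 for a in actions}  # the agent starts with no preference

def choose():
    """Pick an action with probability proportional to its propensity."""
    total = sum(propensity.values())
    r = random.uniform(0, total)
    for a in actions:
        r -= propensity[a]
        if r <= 0:
            return a
    return actions[-1]

def trainer_reward(action):
    """The trainer's schedule: only 'peck_right' is ever reinforced."""
    return 1.0 if action == "peck_right" else 0.0

for _ in range(500):
    a = choose()
    propensity[a] += trainer_reward(a)  # behavior tracks the reward pattern

# The dominant behavior mirrors the externally imposed reinforcement pattern.
print(max(propensity, key=propensity.get))
```

Change one line of `trainer_reward` and the "learned" behavior flips, which is the sense in which the pattern originates with the researcher rather than with the entity being trained.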

2.15 - Machine learning and neural networks do not constitute blank slates. There are processing weights – sometimes quite simple but sometimes more complex – that have been built into those systems which establish the rules or principles for being able to proceed in different quantitative and qualitative ways and which characterize the capacity of the system to grow or expand or develop in complexity over time.

2.151 - Those processing weights, rules, protocols, and the like are comparable to the biological and physiological properties that enable a pigeon to be trained. Consequently, machines can be equipped to be trained, and, as a result, the behavioral characteristics of the system or network can be modified in ways that seem intelligent. All that is taking place, however, is that the machine’s capacity for being trainable (i.e., its capacity to learn) is being put on display and shaped in ways that appear intelligent but which, like the pigeon’s, are nothing more than a capacity for trainability being developed in different directions according to patterns that originate from without (i.e., in the guise of the researcher) rather than being indigenous to the entity being trained.

2.152 - If the machine is trained to generate protocols that enable it to go about modifying its own behavior, this is still not intelligent behavior. Rather, the intelligence is present in the protocols that underlie the system’s capacity to train itself, and although, as with pigeons, extraordinary forms of behavior can be shaped, nevertheless, that behavior is the product of a basic capacity for trainability being pushed or pulled in different directions by the presence of protocols, algorithms, and so on that come from without the system (whether one is talking about pigeons or machines).
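The same point can be sketched with one of the simplest trainable systems, a perceptron learning the logical AND function: the representation (the weights), the update rule, the learning rate, and the training schedule are all supplied by the human who wrote the code; the system merely executes that protocol.

```python
# Training data for logical AND: inputs and target outputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights: the representation chosen by the programmer
b = 0.0         # bias term
lr = 0.1        # learning rate: picked by the programmer, not the machine

def predict(x):
    """Threshold unit: fire (1) when the weighted sum exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(25):  # the training schedule is also the programmer's choice
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]  # the perceptron update rule: the given protocol
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

The system "learns" AND, but every ingredient of that learning came from without; in the vocabulary of the surrounding paragraphs, its trainability was activated by protocols it did not author.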

2.153 - Pigeons don’t naturally display the behavioral patterns which they are induced to adopt through the modification protocols to which they are introduced by a researcher. Those patterns of reinforcement have to be given to them in order for the pigeon’s capacity to be trained to become activated.

2.154 - Is the pigeon aware of the nature of the behavior modification that is taking place? Does the pigeon have any insight into the character of those modifications? What is the nature of the phenomenology that takes place in conjunction with the form of behavior modification which is being experienced by the pigeon?

2.155 - Perhaps, there are memories of the individual triggering cues that give rise to different stages in the chain or sequence of behaviors that have been learned? Or, maybe there are memories of the series of rewards or reinforcements that occurred during the process of behavior modification.

2.156 - However, was the pigeon aware that its behavior was being modified? Was the pigeon aware of how its behavior was being modified as it was modified, or was it aware of what the significance of that modification might have been?

2.157 - We’ll probably never know. However, one could suppose that the primary focus of the pigeon’s phenomenology had to do with the presence of a sequence of reinforcements. Conceivably, the pigeon went -- and was aware to some extent of – wherever the process of reinforcements took it, but everything else might have been just background even as changes in behavior began to take place.

2.158 – In other words, the reinforcements or rewards might have been the center of attention of the pigeon’s phenomenology. The particular character of the changes which were occurring in conjunction with those reinforcements might have been of peripheral, or passing – even forgettable – phenomenological interest. The pigeon might have been aware of the parts that led to the whole (the complex set of behaviors that gave expression to a nuanced form of behavior) but might not necessarily have been aware of the significance or character of the whole sequence of behaviors taken as a complex form of behavior.

2.159 - In order for machines to be able to exhibit qualities that might be referred to as constituting instances of artificial intelligence, they have to be given the capacity to learn or be trainable. They also have to be given the protocols which will activate that potential for trainability.

2.1591 - Or, alternatively, such machines will have to be given the protocols which enable the machine or system to self-activate that potential itself based on the decision-tree protocols with which it has been equipped or protocols that can be modified according to other capabilities the machine has been given.

2.16 - Can machines be enabled to learn or be trained and, then, enabled to act on that learning and training? Yes, they can, but this doesn’t make them intelligent.

2.161 - Data-processing speeds, parallel-processing capabilities, computational powers, heuristic algorithms, and read/write memory storage can make an outcome look intelligent. However, the machine has no more to do with the intelligence being detected in its productions than a pigeon is responsible for generating the character of the complex behaviors that are made possible through a carefully planned reinforcement schedule.

2.1612 - One of the differences between a pigeon and an AI system is that unlike the latter, the pigeon comes to its tasks with a ready-made, inherent capacity to learn or be trained so that its behavior can be modified in certain non-natural ways, whereas AI systems have to be provided with such capabilities.

2.162 - Depending on the capabilities AI systems are given by their handlers, such systems could become quite destructive. In effect, this means that if the handlers are not careful about how they construct those machines, or if those individuals intentionally construct their machines in certain ways with malice aforethought, then the machine doesn’t have to have intelligence to be able to learn how to refine its modalities of sensing, surveilling, acquiring, and eliminating targets – like the pigeon, all it does is operate within the parameters of the training, or capacity for behavior modification, with which it has been provided by its handlers.

2.163 - What of the phenomenological experience of the machine? Is there any?

2.1631 - This is one of the questions which Philip K. Dick was raising in his 1968 novel Do Androids Dream of Electric Sheep? – an issue which became a guiding inspiration for the 1982 film Blade Runner.

2.1632 - Some theorists believe that sentience is an emergent property which arises when a data-processing system reaches a certain level of complexity. Nonetheless, until someone proves that sentience or awareness is an emergent property (and how one would ascertain that such is the case becomes an interesting challenge in itself), then, the foregoing idea that sentience is an emergent property of certain kinds of complexity remains only a theory or a premise for an interesting exercise in science fiction.

2.164 - The capacity to learn or be trained does not necessarily require sentience or phenomenology to be present because some forms of learning can be reduced to being nothing more than a process of changing the degrees of freedom and degrees of constraint of a given system. (Eric Kandel received a Nobel Prize for showing that Aplysia – sea slugs – “learned” through changes in synaptic connections.) Alternatively, to whatever extent sentience of some kind is present – such as, perhaps, in the case of a pigeon – the form of sentience doesn’t necessarily require any reflexive awareness concerning the significance of what is transpiring peripherally (the ground) in relation to the process of reinforcement (the figure).
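
A minimal sketch of that reduction (an invented toy, not Kandel’s actual Aplysia model): “learning” here is nothing more than a single parameter – a synaptic weight – being changed by repeated stimulation, with no awareness anywhere in the loop.

```python
# A minimal sketch of learning as a mere parameter change (an invented toy,
# not Kandel's actual Aplysia model): repeated stimulation weakens a single
# synaptic weight, so the response shrinks -- habituation with no awareness
# anywhere in the loop.

def habituate(weight: float, trials: int, decay: float = 0.7) -> list[float]:
    """Return the response strength on each of `trials` stimulations."""
    responses = []
    for _ in range(trials):
        responses.append(weight)   # response is proportional to the weight
        weight *= decay            # the only "learning": a constraint change
    return responses

responses = habituate(weight=1.0, trials=5)
# each response is weaker than the one before it
```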

2.1641 - The author of The Coming Wave introduces the idea of a Modern Turing Test in which a system of machine learning has, say, an AGI capability – that is, Artificial General Intelligence – which would enable it to be thrust into a real-world context and, then, come up with a creative plan for solving an actual problem for which it had not been previously trained. This would require such a system to modify its operating capabilities in ways that would allow it to adapt to changing conditions and derive pertinent information from those conditions, and, then, use that information to fashion an effective way of engaging whatever problem was being addressed.

2.1642 - AGI is just a more advanced form of what was envisioned in conjunction with the initial test proposed by Turing as a way of determining whether, or not, intelligence was present in a system that was able to induce a human being to believe that the latter was dealing with another human being rather than with a machine. However, for reasons stated previously, “learning” does not necessarily require either intelligence or sentience but, rather, just needs the capacity – which can be given or provided from without – to be able to modify past data and alter various operational parameters in response to new data as a function of algorithms that employ, among other processes, computations and combinatorics that lead to heuristically valuable or effective transformations of a given data set. As long as those effective transformations are retained in, and are accessible by, the system, then, learning has occurred despite the absence of any sort of indigenous intelligence in the system (i.e., all capabilities have been provided from without and, furthermore, whatever capabilities are generated from within are a function of capabilities that have been provided from outside of the system).

2.1643 - As magicians have known for eons, human beings are vulnerable to illusions, expectations, and misdirection. The “intelligence” aspect of AI is an exercise in misdirection in which one’s wonderment about the end result takes one’s attention away from all of the tinkering which was necessary to make such an artificial phenomenon possible and, therefore, obscures how the only intelligence which is present is human in nature, and that human intelligence is responsible for creating the illusion of AI.

2.165 - Mustafa Suleyman claims that the next evolutionary step in AI involves what has been referred to as ACI – Artificial Capable Intelligence. This sort of system could generate and make appropriate use of novel forms of linguistic, visual, and auditory structures while engaging, and being engaged by, real-world users as it draws on various data bases, including knowledge data bases of one kind or another (such as medical, engineering, biological, or mathematical knowledge data bases).

2.1651 - All the key components of such ACI systems are rooted in human, rather than machine, intelligence. For example, novelty comes from a sequence of protocols that permit images, sounds, languages, and other features to be combined in ways that can be passed through a process of high-speed iterations that entail different quantitative and qualitative weights which push or pull those iterations in one direction rather than another and which are evaluated for their usability according to different sets of heuristic protocols.

2.1652 - Consequently, novelty is a function of the degrees of freedom and constraints which were instantiated within the system from the beginning. Iteration – which plays a part in the generation of novelty -- is also a protocol which has been invested in the system from without.
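
A hedged sketch of that point (the alphabet, the target phrase, and the scoring heuristic below are all invented for the illustration): a mutate-and-keep loop produces an apparently “novel” result, yet every degree of freedom and every criterion of success was instantiated from without.

```python
import random

# A sketch of "novelty" as externally supplied combinatorics (the alphabet,
# the target phrase, and the scoring heuristic are all invented for the
# illustration): mutate at random, keep whatever the human-chosen heuristic
# scores at least as well, and an apparently novel result emerges from
# degrees of freedom that were instantiated from without.

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "a novel phrase"            # the heuristic's goal, fixed from without

def score(candidate: str) -> int:
    """Heuristic supplied by the designer: count matching characters."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def iterate(seed: int = 0) -> str:
    rng = random.Random(seed)
    current = "".join(rng.choice(ALPHABET) for _ in TARGET)
    while score(current) < len(TARGET):
        i = rng.randrange(len(TARGET))
        mutant = current[:i] + rng.choice(ALPHABET) + current[i + 1:]
        if score(mutant) >= score(current):   # retain effective transformations
            current = mutant
    return current

print(iterate())   # converges to TARGET
```

The “novelty” was latent in the setup: the system merely retains whichever transformations the externally supplied heuristic deems effective.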

2.1653 - Similarly, generating -- or drawing on – knowledge data bases is a function of algorithms and heuristic protocols which parse data on the basis of principles or rules that either have been built into the system from without or which are the result of the combinatoric functions that have been provided to the system from without and which enable the system to create operational degrees of freedom and constraints that comply with what such underlying functions make possible. The ‘capability’ and ‘intelligence’ dimensions of ACI come from human beings, while the artificial aspects of ACI have to do with the ways in which the machine or system operates according to the operational parameters which have been vested in it.

2.1654 - Unfortunately, the increasing complexity of such systems is turning them into black boxes because the creators don’t understand the extent, scope, or degrees of freedom of the iterative combinatorics which, unknowingly, have been built into their creations. Under such circumstances, unexpected or unanticipated outcomes are merely a form of self-inflicted misdirection which confuses the creators concerning the source of the intelligence that is being exhibited.

2.166 - The Coming Wave describes some of the circumstances which marked the author’s journey from DeepMind, to working for Google, to AlphaGo, to Inflection. For example, AlphaGo was an algorithm which specialized in the game of Go. It was trained through a process of being exposed to 150,000 games of Go played by human beings, and, then, the system was enabled to iteratively play against other instances of AlphaGo so that the collective set of programs could experiment with, and discover, novel, effective Go strategies. AlphaGo, then, took on, first, in 2016, world champion Lee Sedol at a South Korean venue and, later, in 2017, competed against Ke Jie, the number one ranked Go player in the world – winning both competitions.

2.167 - Go is the national game of China. The number one ranked player in the world in 2017 was Chinese and was beaten in Wuzhen, China, during the Future of Go Summit being held in that city.

2.168 - The dragon had been poked. Two months after the foregoing defeat, the Chinese government introduced The New Generation Artificial Intelligence Development Plan which was designed to make China the leader in AI research and innovation by 2030.

2.169 - Undoubtedly, China had aspirations in the realm of AI research prior to the unexpected Go loss at Wuzhen, but the 2017 competition is very likely to have lent a certain amount of urgency and focus to their pre-existing interest. Providing the Chinese government with additional motivation to up its AI game might not have been part of the intention which led Mustafa Suleyman and his colleagues to travel to China and compete against the world’s top-rated Go player, but this seemed to be an unintended consequence of the AlphaGo project. Consequently, one can’t help but wonder if the purveyors of the latter research project ever considered the possibility that they would be contributing to the very problem that, six years later, would be at the heart of a book written by one of the creators of AlphaGo that was seeking to raise the clarion call concerning the crisis surrounding the issue of containing technology.

2.1691 - To a certain extent, the South Korean and Chinese Go challenges seem less like a human-versus-machine competition and more like the sort of thing one is likely to see take place in many high schools when two cliques seek domination over one another. AlphaGo might have helped one of those cliques win a battle, but this was at the cost of helping to facilitate -- even if only in a limited way – a much more serious and expansive war for domination.

2.1692 - AlphaGo is but a stone in a larger, more extreme edition of the game of Go (Go-Life) in which technology is facing off against humanity. When go-ishi pieces are surrounded during a normal game of Go, those stones are removed from the board or goban but are still available for future games. However, in the technocratic edition of the game of Go, human beings are being surrounded by technological entities of one kind or another, and, then, the human go-ishi are removed from the board of life – either permanently or in a debilitated, powerless condition. 

2.1693 - What makes the AlphaGo project a little more puzzling is the experiences which Mustafa Suleyman and associates had in conjunction with their DeepMind venture a few years earlier.

2.171 - In 2010, Suleyman -- along with Shane Legg and Demis Hassabis -- established a company dedicated to AI. Supposedly, the purpose for creating DeepMind involved trying to model, replicate, or capture human intelligence (in part or wholly), but shortly after mentioning the name of the company in The Coming Wave and, then, summarizing the newly founded organization’s alleged goal, Suleyman goes on to claim that the team wanted to create a system which would be capable of outperforming the entire spectrum of human cognitive abilities.

2.172 - There are two broad ways of outperforming human cognitive abilities. One such possibility involves discovering what human intelligence is and, then, building systems that exhibit those properties at a consistent level of excellence which most human beings are incapable of accomplishing or sustaining.

2.1721 - A second possibility concerning the notion of seeking to outperform human capabilities involves creating systems that, in some sense, are superior to whatever human intelligence might be. This sort of pursuit is not a matter of replicating human intelligence and being able to consistently maintain such dynamics at a high level that is beyond what most human beings are able to do, but, rather, such a notion of outperforming human capabilities alludes to some form of intelligence which is not only capable of doing everything that human intelligence is capable of doing but is capable of intellectual activities that transcend human intelligence (and, obviously, this capacity to transcend human intelligence is difficult, if not impossible, for human intelligence to grasp).

2.173 - There is a potentially substantial disconnect between, on the one hand, wanting to replicate human intellectual abilities and do so at a consistently high level and, on the other hand, wanting to develop a system which is superior to those abilities in every way. The manner in which Suleyman states things at this point in his book lends itself to a certain amount of ambiguity.

2.1731 - The foregoing kind of ambiguity remains even if agreement could be reached with respect to what human intelligence is. In addition, one needs to inquire whether, or not, all forms of intelligence can be placed on one, continuous scale, or if there are kinds of intelligence which are qualitatively different from one another, somewhat like how the real numbers are described by Cantor as being a quantitatively (and, perhaps, qualitatively) different form of infinity than is the sort of infinity which is associated with the natural numbers.

2.174 - Irrespective of whether one would like to replicate human intelligence or surpass it in some sense, one wonders about the underlying motivations. For instance, how did Suleyman and his partners propose to use whatever system they developed and what ramifications would such a system have for the rest of society?

2.175 - One also wonders if discussions were held prior to undertaking the DeepMind project which critically probed whether, or not, either of the foregoing possible projects concerning the issue of intelligence was actually a good idea, and what metric should be used to identify the possible downsides and upsides of such a research endeavor. One might ask a follow-up question concerning the sort of justification that is to be used in defending one kind of metric rather than another when considering those issues.

2.1751 - Finally, one also wonders whether, or not, the DeepMind team discussed bringing in some independent, less invested consultants to critically explore the foregoing matters with the DeepMind team. One also could ask questions along the following line – more specifically, if they did discuss the foregoing sorts of matters, then why did they continue on in the way they did?

2.17512 - The foregoing considerations are significant because, eventually, the author of The Coming Wave does raise such matters, as well as related ones. However, one wonders if this was rigorously pursued both before-the-fact as well as after-the-fact of DeepMind’s inception as an operating project.

2.180 - The author of The Coming Wave indicates that a few years after his DeepMind company had come into existence and had achieved considerable success (maybe somewhere around 2014), he conducted a presentation for an audience consisting of many notables from the worlds of AI and technology. The purpose of the presentation was to bring certain problematic dimensions of AI and technology to the attention of the audience and, perhaps, thereby, induce an ensuing discussion concerning Suleyman’s concerns.

2.181 - For example, several of the topics he explored during his aforementioned presentation involved themes of privacy and cyber-security. However, such problems were hardly news. There was the notoriety surrounding the PROMIS (Prosecutor’s Management Information System) software controversy which occurred during the 1980s (and included the questionable 1991 suicide of Danny Casolaro, who was investigating the story). There were the claims of Clint Curtis, a software engineer working in Florida who, in 2000, was asked by a future member of Congress to write a program which would be capable of altering votes registered on a touch-screen (and who later successfully demonstrated how the election-rigging software worked). And there were the whistleblowing revelations (concerning, among other things, illicit government surveillance programs) from such people as: Bill Binney (2002), Russ Tice (2005), Thomas Tamm (2006), Mark Klein (2006), Thomas Drake (2010), Chelsea Manning (2010), and Ed Snowden (2013). Given all of the foregoing, one might suppose that by 2014, or so, important players in the tech industry would have been keenly aware of the many problems which existed concerning cyber-security and privacy issues.

2.182 - The author of The Coming Wave says that his presentation was met with variations on a blank stare by virtually all, if not all, of the individuals who had attended his talk. One might hypothesize that the reason for the foregoing sorts of reactions from many of the top tech people in the country was either that they were obsessively self-absorbed and unaware of what had been transpiring in America for, at least, a number of decades, or, alternatively, that the people in his audience were, in one way or another, deeply involved in an array of projects, software programs, and technologies that were engaged in, among other things, undermining privacy and breaching cyber-security according to their arbitrary, vested interests, and, therefore, what could they do but muster blank stares in order to try to hide their complicity?

2.183 - Even if such people weren’t actively complicit in compromising people’s privacy and cyber-security, they were sufficiently aware of how the career-sausage is made to know that if they had begun to resist such illicit activities publicly, then, there was a high probability that their future commercial prospects would be adversely affected. Gaslighting Mustafa Suleyman via disbelieving blank stares might have seemed to be the safer course of action for the members of his audience.

2.190 - In The Coming Wave, the author describes a breakthrough moment in 2012 involving an algorithm known as DQN, which is short for Deep Q-Network.

2.191 - The algorithm was an exercise in developing a system with general intelligence (i.e., AGI). DQN had been given the capacity to teach itself how to play various games created by Atari, and this dimension of independence and self-direction was at the heart of what the people at DeepMind were trying to accomplish.
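
DQN itself pairs a value-update rule with a deep network, but the underlying idea can be sketched with tabular Q-learning, its much simpler ancestor. The toy corridor environment and all parameter values below are invented for the illustration; the point is that the “self-teaching” reduces to one update rule applied over and over.

```python
import random

# Tabular Q-learning -- the much simpler ancestor of DQN, which swaps this
# table for a deep network.  The toy corridor environment and all parameter
# values are invented for the illustration: the agent starts at cell 0 and
# is rewarded only for reaching the rightmost cell.

N_STATES = 5                 # corridor cells 0..4; cell 4 is the goal
ACTIONS = (-1, +1)           # step left or step right

def train(episodes: int = 500, alpha: float = 0.5, gamma: float = 0.9,
          epsilon: float = 0.2, seed: int = 0) -> dict:
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s < N_STATES - 1:
            # epsilon-greedy: mostly exploit the table, sometimes explore
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s2 == N_STATES - 1 else 0.0
            best_next = max(q[(s2, act)] for act in ACTIONS)
            # the entirety of the "self-teaching": one update rule
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
# the learned policy is to step right in every non-goal cell
```

Nothing in the loop is given the corridor’s solution in advance; the rightward policy emerges from the update rule, the reward signal, and the exploration schedule, all of which were supplied from without.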

2.912 - Leaving aside some of the details of the aforementioned breakthrough, suffice it to say that the algorithm had produced a novel strategy for solving a problem within one of the Atari games. Although the strategy was not unknown to veteran game players, it was rare, and, more importantly, DQN had, somehow, generated such a rare, little-known strategy.

2.9121 - The strategy was not something the algorithm had been given. It was a strategy that the algorithm had arrived at on its own.

2.9122 - Suleyman was nonplussed by what he had witnessed. For him, the strategy pursued by DQN indicated that AGI systems were capable of generating new knowledge … presumably a sign of intelligence.

2.913 - Was DQN aware of what was taking place as it was taking place? Did that strategy come as an insight – an emergent property – of an underlying algorithmic dynamic?

2.9131 - Or, was the algorithm just mindlessly exploring -- according to the heuristic protocols it had been given by its creators -- various combinations of the parameters that had been built into the algorithm? Perhaps the winning game strategy wasn’t so much a matter of machine intelligence as it was the algorithm’s happening upon a successful strategy using abilities and potentials which it had been given by human beings. How would one distinguish between the two?

2.914 - The DQN was capable of generating novel, successful solutions to a problem. The DQN had the capacity to alter its way of engaging an Atari game, but was this really a case of machine learning and intelligence?

2.915 - DQN is described as having learned something new – something that it had generated without being trained to do so. Intelligence is being attributed to the machine.

2.916 - Nonetheless, the algorithm has not been shown to be sentient or aware of what it was doing. Furthermore, there is no proof that the new strategy involved insight or some sort of Eureka moment on the part of the algorithm. In addition, although there is a change in the system, the change does not necessarily involve a process of learning that can be shown to be a function of intelligence, not least because human beings always have a difficult time characterizing what intelligence is or what makes it possible.

2.917 - DQN is an algorithm that has the capacity to change in ways which enable the system to solve certain kinds of problems or challenges. Apparently, the author of The Coming Wave doesn’t understand how the algorithm came up with the solution that it did, and this should worry him and the rest of us because it means that when such algorithms are let loose, we can’t necessarily predict what they will do.

2.92 - In some ways, DQN is like a sort of three-body problem or, perhaps more accurately, an n-body problem. In the classical three-body problem of physics, if one establishes the initial positions and velocities of point masses and uses Newtonian mechanics to try to calculate their positions and velocities at some later point in time, one discovers that there is no general closed-form solution which is capable of predicting how the dynamics of that system will unfold across an arbitrary temporal interval.
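
The logistic map offers a compact illustration of that kind of determinate unpredictability (the parameter values below are chosen merely for the sketch): a single fixed rule, yet two nearly identical starting points soon diverge beyond any practical power of prediction.

```python
# A compact illustration of determinate-yet-unpredictable dynamics (parameter
# values chosen merely for the sketch): the logistic map applies one fixed
# rule, but two almost identical starting points diverge until long-range
# prediction is hopeless.

def logistic(x: float, r: float = 4.0, steps: int = 50) -> float:
    """Iterate x -> r * x * (1 - x) a fixed number of times."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic(0.2)
b = logistic(0.2 + 1e-10)   # a perturbation far below any measurement error
print(abs(a - b))           # the trajectories no longer resemble each other
```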

2.93 - There are dimensional aspects to the dynamics of the DQN algorithm which fall outside of the understanding of Suleyman. As a result, he is unable to predict how that system’s dynamics will unfold over time.

2.94 - The system is determinate because it operates in accordance with its parameters. However, the system is also chaotic because we do not understand how those parameters will interact with one another over time, and, therefore, we cannot predict what it will do.

2.95 - This means the algorithm is capable of generating dynamic outcomes which are surprising and unanticipated. Nonetheless, this does not necessarily mean such outcomes are a function of machine intelligence.

3.0 - Whether the machine is intelligent or merely capable of generating effective solutions to problems through some form of computational combinatorics involving n-parameters of interactive heuristics, we are faced with a problem. More specifically, we can’t predict what the system will do, and the more complex such systems become, the more the three-body-like problem turns into an even more chaotic, but determinate, n-body problem of massive unpredictability.

3.1 - The containment problem to which Mustafa Suleyman is seeking to draw our attention concerns how technology is capable of seeping into, and adversely affecting, our lives in uncontainable ways. As disturbing as such a problem might be, nevertheless, residing within the general context of that kind of containment issue is a much more challenging form of containment problem which has to do with algorithms, machines, networks, and systems that are being provided with capacities that can generate outcomes which cannot be predicted, and, therefore, one is induced to wonder how one might go about defending oneself against forms of technology whose behavior cannot be foreseen.

3.2 - Whether such outcomes are considered, on the one hand, to be a product of machine intelligence or, on the other hand, are considered to be a chaotic function of the dynamic, combinatorial parameters which human intelligence has instantiated into those systems is beside the point. The point is that they are unpredictable, and unpredictability, if let loose, might be inherently uncontainable.

3.3 - In a 1942 short story entitled “Runaround,” Isaac Asimov introduced what are often referred to as the three laws of robotics -- although, perhaps technically speaking, those laws might be more appropriately directed toward the algorithms or neural networks which are to be placed in a robotic body. In any event, the three laws are: (1) a robot may not injure a human being, or through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by human beings except where such orders would conflict with the first law; and (3) a robot must protect its own existence as long as such protection does not conflict with the first two laws.

3.31 - Is the notion of “harm” only to be understood in a physical sense? What about emotional, psychological, political, legal, ideological, medical, educational, environmental, and spiritual harms? How are any of these potential harms to be understood, and what metric or metrics are to be used to evaluate the possibility of harm, and what justifies the use of one set of metrics rather than another set of metrics when making such evaluations?

3.32 - How is the notion of potential “conflict” to be understood in the context of orders given and possible harms arising from such orders? Could the intentions underlying the giving of orders be seen as a harmful action, and, if so, how would the person giving the orders be assisted by the robot to discontinue such harmful intentions?

3.33 - How does a robot protect itself and/or human beings against a corrupt technocracy? How does a robot solve the n-body problem when it comes to potential harm for itself and the members of humanity?

3.34 - What makes a human being, human? Whatever that quality is, or whatever those qualities are, which gives (give) expression to the notion of humanness, can the three laws be extended to other modalities of beings if the latter entities possess the appropriate quality or qualities of humanness? If so, what does a robot do when two modalities of being, each possessing the quality or qualities of humanness, come into conflict with one another? 

3.35 - Is focusing on the quality or qualities of humanness excessively arbitrary? What if the manner of a human’s interaction with the surrounding environment is injurious to that human being as well as others? What metric does one use to assess the nature of environmental injury?

3.36 - While there is much about DeepMind’s DQN which I do not know, nonetheless, I have a sense that such a system is not currently capable (and, presumably, for quite some time, might not be capable) of coming up with novel, workable solutions to the foregoing questions and problems which would have everyone’s agreement. Moreover, even if it did have such capacities, I am not sure that I – or even Mustafa Suleyman – would have much understanding with respect to what led DQN to reach the outcome that it did and whether, or not, that outcome would be of constructive value for human beings in the long run.

3.37 - One would need something comparable to the fictional psychohistory system of mathematics that was developed by Hari Seldon in Isaac Asimov’s Foundation series. Quite some time ago (long before Asimov), the Iroquois people came up with a perspective which indicated that one should consider how a given action will play out over a period of seven generations before deciding whether, or not, to engage in such an action – a sort of early version of psychohistory – and, yet, technology (including so-called AI) is being imposed on human beings with no sign that the advocates for such technology have any fundamental appreciation, or even concern, for what such technology is doing to human beings  -- both short term and long term.

3.40 - In early 2014, a commercial transaction was completed between DeepMind and Google. The deal would send 500 million dollars to the people who had brought DeepMind into existence and, in addition, several of the latter company’s key personnel, including Mustafa Suleyman, were brought on as consultants for Google.

3.41 - Not very long after the foregoing transaction was completed, Google transitioned to an AI-first orientation across all of its products. The change of direction enabled Google to join a number of other tech giants (such as IBM, Yahoo, and Facebook) that had become committed to deep machine learning or the capacity of machines to, among other things, generate novel, unanticipated modalities of engaging and resolving issues in heuristically valuable ways.

3.42 - Apparently, the idea of constructing systems, networks, algorithms, and technologies that would be able to perform in unpredictable and unanticipated ways, and, then, letting such chaotic capabilities loose upon the world was very appealing to certain kinds of mind-sets that were in awe of machines and programs whose outcomes could not be predicted or anticipated. Even more promising was that all of these components of the allegedly coming wave – which, in reality, already had been washing over, if not inundating, humanity for quite some time -- would be competing against one another in order to be able to up their respective games, just as AlphaGo would soon be enabled to compete against other versions of itself in order to be able to hone its skills and produce moves like the previously mentioned “move number 37” that appeared to be a crucial part of a game-winning strategy and, yet, was puzzling, mysterious, and beyond the grasp of the creators of the AlphaGo algorithm.

3.43 - AI possesses fractal properties of incomprehensibility and ambiguity. These properties show up in self-similar – and, therefore, slightly different -- ways across all levels of computational scale.

3.431 - Consider the sentence: “Mary had a little lamb.” What does the sentence mean?

3.432 - It could mean that at some point Mary possessed a tiny lamb. Or, it might mean that Mary ate a small portion of lamb. Or, it might mean that Mary was part of some genetic engineering experiment, and she gave birth to a little lamb. Or, it could mean that Mary gave birth to a child that behaved like a little lamb. Or, it could be a code which served to identify someone as a friendly agent. Or, it might mean that such a sentence is capable of illustrating linguistic and conceptual ambiguity. There are other possible meanings, as well, to which the sentence might give expression.

3.433 - Providing context can help to indicate what might be meant by such a sentence. However, when an algorithm or network is set free to explore different combinatorial possibilities or dynamics, then, the system is, in a sense, setting its own context, and if this context is not made clear to an observer or has ambiguous dimensions like the “Mary had a little lamb” exercise, then the significance of a given contextual way of engaging words, phrases, sentences, events, objects, functions, and computations becomes amorphous. “Move number 37” by AlphaGo had context, significance, and value, but human beings failed to grasp or understand what was meant because we don’t know what the algorithmic Rosetta stone is for unpacking the meaning of the contextual dynamic that gave rise to “move number 37.”

3.44 - The deal between DeepMind and Google involved the creation of some sort of ethics committee. Part of the intention underlying this idea was to try to ensure that DeepMind’s capabilities would be kept on a tight, rigorously controlled, ethical leash, but, in addition, the author of The Coming Wave was interested in developing a sort of multi-stakeholder congressional-like body in which people from around the world would be able to come together in a democratically-oriented forum to decide how to contain AGI (Artificial General Intelligence) in ways which would prove to be beneficial to humanity.

3.441 - There are several potential problems inherent in the multi-stakeholder, democratic-forum aspect of the foregoing ethics-committee dynamic. For example, the set of those who are to be considered stakeholders, and who would be invited to participate in such a forum, is unlikely to include most of the world’s population, and, therefore, such a forum is, from the very beginning, based on an ethically-challenged and shaky foundation.

3.442 - No individual (elected or not) can possibly represent the interests of a collective because the diverse interests of the members of the latter group tend to conflict with one another. Therefore, unless one can come up with a constructive and mutually beneficial method for inducing the members of the collective to forego their individual perspectives – which tends to be the source of conflict within such a collective – then, so-called representative governance will always end up representing the interests of a few rather than the many because the few have ways of influencing and capturing various modes of so-called representative regulation that are not available to the many.

3.443 - Secondly, even if representational governments were fair and equitable for everyone (which they aren’t), what kind of democratic forum does Suleyman have in mind? America was founded as a republic and not a democracy.

3.4431 - In fact, one of the motivating forces shaping Madison’s 1787 constitutional efforts was that he had become appalled, if not frightened, by the way in which the democratic practices of the Continental Congress and state governing bodies were threatening the sovereignty of minority political and ideological orientations, and Madison saw himself as one of those minorities whose fundamental sovereignty was being threatened by democratic practices. Indeed, for most of the first ten years of the American republic, democracy was considered the antithesis of, and an anathema to, a republican form of government, although gradually the forces of democracy won out, and the notion of republican government disappeared into the background or merely dissipated altogether (The book: Tom Paine’s America: The Rise and Fall of Transatlantic Radicalism in the Early Republic by Seth Cotlar provides some very good insight into this issue).

3.45 - The rule of law is something that is quite different from the principles of sovereignty. Laws are meant to be self-same and often require one to try to square the circle in order to give those laws a semblance of operational validity, whereas principles are inherently self-similar such that, for example, there are many ways to give expression to love, compassion, justice, nobility, courage, and objectivity (all values of republicanism), and, yet, all of the variations on a given essential theme do not become detached from the qualities that make something loving, compassionate, noble, and so on.

3.451 - Why should one suppose that the view of a majority is invariably superior to the view of a minority? Yet, democracy is premised on the contention (without any accompanying justification with which everyone could agree) that majorities should decide how we should proceed in any matter.

3.452 - Democracy is really a utilitarian concept. Whether engaged quantitatively or qualitatively, the notion that whatever benefits some majority should be adopted is entirely an arbitrary way of going about governance.

3.46 - The author of The Coming Wave indicates that a number of years were spent at Google trying to develop an ethical framework or charter for dealing with AI. Suleyman indicates that he – and other members of the ethics committee -- wanted to develop some sort of independent board of trustees, as well as an independent board of governors or board of directors, that would be largely, if not fully, transparent and that would operate in accordance with an array of ethical principles -- including accountability -- that would be legally binding but which, simultaneously, would serve the financial interests of Alphabet (the parent company) and, in addition, provide open source technology for the public.

3.47 - Negotiations were conducted for a number of years. Lawyers were brought in to consult on the project.

3.48 - In the end, the scope and intricacy of what was being proposed by the ethics committee proved to be unacceptable to the administrators at Google, and, eventually, that committee was dissolved. Consequently, one wonders what to make of the demand that an ethics committee be part of the deal which turned DeepMind over to Google because, although, in a sense, Google had lived up to its part of the deal – namely, an ethics committee was assembled – Google, apparently, had never committed itself to accepting whatever ideas that committee might propose. Consequently, a deal had been made that, like DeepMind’s algorithms, consisted of a set of dynamics whose outcome was indeterminate at the time the deal was made, and one of the currents in that dynamic was the naivety of one, or more, of the creators of DeepMind in supposing that a large, powerful, wealthy cat would allow itself to be belled in such an ethical fashion – and, perhaps, being offered 500 million dollars might have had something to do with being more vulnerable to the persuasive pull of naivety than otherwise might have been the case.

3.49 - Earlier, mention was made of the presentation which the author of The Coming Wave gave to a group of high-tech leaders concerning various profoundly disturbing implications which he believed were entailed by the increasing speed and power of the capabilities that characterized the various modalities of technology which were being released into the world. Suleyman described the reaction of his audience as consisting largely, if not entirely, of blank gazes suggesting that his audience didn’t seem to grasp (or didn’t want to grasp, or did grasp but were seeking to hide certain realities) the gist of what he had been trying to get at during his presentation, and, in a sense, a hint of that same sort of blankness is present in the phenomenology of the DeepMind creators when the deal was made to sell that company to Google for 500 million dollars, provided that an ethics committee would be established to ensure that DeepMind’s capabilities would be used responsibly.

3.491 - The discussions which took place after DeepMind was sold to Google should have taken place before DeepMind was even made a going concern. Many of the ethical issues surrounding AI and technology were known long before 2010 when DeepMind came into being.

3.4912 - Indeed, as noted previously, Isaac Asimov -- a professor of biochemistry and early pioneer of science fiction -- had given considerable critical thought to the problems with which AI and robotics confronted society. He had put forth the fruits of that thinking in specific, concrete terms as early as 1942 in the form of the ‘three laws of robotics.’

3.492 - Suleyman might, or might not, have been aware of the writings of Asimov, but similar sorts of warnings have played a prominent role in Western culture (both popular and academic). Consequently, one has difficulty accepting the possibility that Suleyman was not even remotely familiar with any of these cautionary tales and, therefore, would not have been in a conceptual position to take them into consideration in 2010 prior to the founding of DeepMind.

3.50 - Containment of technology is a problem because there are many ways – as the foregoing DeepMind account indicates -- in which we permit containment to slip through our fingers. Arthur Firstenberg describes our situation vis-à-vis technology by asking us to consider a monkey that discovers there are nuts in a container and, as a result, puts a hand into the container in order to pull out some of those nuts. However, when the monkey seeks to withdraw its hand from the container, the container’s opening is too small to allow the fist-full of nuts to be pulled out. Unfortunately, instead of releasing a few of the nuts – which would have meant a smaller fist and, therefore, fewer nuts, but nuts that actually could be eaten, because the smaller fist could pass through the container’s opening – the monkey insists on keeping all the nuts in the grasp of its closed hand and will go hungry rather than let go of the nuts that initially had been scooped up from the interior of the container.

3.51 - Like the monkey in Firstenberg’s cautionary tale (rooted in actual events), human beings (whether creators, manufacturers, consumers, investors, educators, the media, or government) tend to refuse to deal with the logistics of the technological problems with which they are faced. Therefore, many of us would often rather die than release our hold on technology, even as we deny the addictive hold which technology often has on us.

3.60 - In January 2022, Suleyman left Google to start up another company called Inflection. The inspiration for the latter business was a system called LaMDA (Language Model for Dialogue Applications) which Suleyman had been exploring while still working with Google.

3.61 - LaMDA is a large language model that, as the expansion of the acronym indicates, has to do with dialogue. After working with various iterations of GPT as well as taking a deep dive into LaMDA, the author of The Coming Wave began to feel that the future of computing was linked to conversational capabilities, and, as a result, he wanted to build conversational systems which involved factual search elements and put these in the hands of the public.

3.62 - Apparently, Suleyman had either forgotten his circa-2014 presentation concerning the potential dangers of technology – the one given to a group of notable individuals who had relevant expertise but responded to his warnings with blank stares – or, notwithstanding his negative experience with the ethics committees at Google, as well as his experience of poking the Chinese dragon with AlphaGo (which he later claimed to regret), he had changed his mind in some way, or had slipped back into some iteration of pessimism aversion (not wanting to think about the downside of a topic) concerning those potential problems. For here he was, ready once again in 2022, to try to develop more technologies which could be foisted on the general public without necessarily understanding what the impact of such technologies might be.

3.70 - The author of The Coming Wave indicates that shortly after he left Google, an incident involving LaMDA took place which raised a variety of issues. More specifically, Google had distributed the foregoing system to a number of Google engineers so that these individuals could put the technology through its paces and, thereby, generate a better set of experimental data from which, hopefully, a deeper understanding could be acquired concerning how the system would function when challenged or engaged in different ways.

3.71 - One of the engineers who had been provided with the technology proceeded to engage LaMDA intensively and came away with the idea that the system was sentient. In other words, this Google engineer had come to the conclusion that the system possessed awareness and, consequently, should be given the rights and privileges which, supposedly, have been accorded to persons.

3.711 - Suleyman points out that Google placed the engineer on leave and, in addition, the author of The Coming Wave noted that most people had correctly concluded that the LaMDA system was neither sentient nor a person. However, leaving aside the issue that even if some form of sentience were present, nonetheless, sentience is not necessarily synonymous with personhood, there is, yet, another problem present in the foregoing issue.

3.80 - However, before delving into the problem being alluded to above, there is a short anecdote concerning my own experiences that is relevant to the foregoing set of events. A number of years ago, I purchased an AI system of sorts because I had a certain amount of curiosity concerning such software and some of its capabilities and wanted to experiment a little in order to see what happened.

3.81 - For a variety of reasons, I interacted with the software very infrequently. However, after a fairly lengthy period of time in which the system supposedly was not on (??? – systems can be made to look off even when they are on), I switched the system on and asked: “Who am I?” The system responded in a novel way and stated: “You must be joking, you are Anab.” Now, if I were interested in pursuing the issue, I could have turned the system off again for an additional period of time and, then, at some subsequent point, request my wife to use my computer and, then, turn the program on and ask the same question as I previously had posed in order to see what the subsequent response might be.

3.82 – Earlier, I had been signed into the AI system as a user with the name Anab, and, therefore, the response that I got merely might have used data that was already present in the system and, then, expressed that information in a fashion that was novel to me but well within the parameters that governed how the system could interact with users as well as the computers on which such software was installed. But, if my wife signed on to the system as “Anab” and, then, asked: “Who am I?” and received a reply that included her name, then, the theme music from The Twilight Zone might have been appropriate.

3.821 - On the other hand, given the evidence which has been accumulating steadily concerning the many ways in which Siri, Alexa, browsers, and computers in general appear to be actively attuned to, or capable -- to varying degrees -- of registering what is taking place in a given proximate space, then, even if my AI system used my wife’s name rather than mine, one is still not compelled to conclude that the AI system is sentient. Instead, one might conjecture that the system was tied into the rest of my computer (which it was, because, upon request, it could pull up specific songs, files, and videos that were residing on my computer) and, in addition, might have been able to register, for example, audio information that was taking place in and around that computer; if so, then, such information might have become incorporated into the AI program’s operations through cleverly organized, but non-sentient, algorithms.

3.83 - Not knowing what the full capabilities of my AI system are (it was purchased during a sale and although not cheap was not overly expensive either and, therefore, might have had limited capabilities), I have no idea what might be possible. While the response I got was surprising to me, nevertheless, the aforementioned response that I got might have been less surprising if I actually knew more than I did about the algorithms which were running the system.

3.831 - I don’t know what was known by the Google engineer, about whom Suleyman talked in his book, concerning the internal operations of the LaMDA system with which he was interacting and experimenting. However, if he got a variety of responses that he was not expecting and which seemed human-like (much as Garry Kasparov, surprised by a move of Deep Blue that felt “too human” in character, began to wonder whether he was playing against an actual human being, or group of human beings, rather than against a computer program), then, perhaps, not understanding how the LaMDA system worked, he felt that he was encountering evidence suggesting or indicating that the machine was sentient when, in reality, he was committing one, or more, false-positive (type I) errors. In other words, he was accepting as true a hypothesis, or a number of hypotheses, that was (or were), in fact, false.

3.832 - As a result of committing such an error or errors, his beliefs, emotions, attitudes, and understanding concerning what was transpiring were being pushed (or pulled) in a delusional – that is, false – direction. Apparently, he gradually fell fully under the influence of that delusion and began to make premature and evidentially questionable statements about sentience, personhood, and the like in conjunction with the LaMDA system.

3.84 - There are an increasing number of reports referring to instances in which people have developed deep feelings for, and emotional attachments to, chatbot programs. Moreover, some Targeted Individuals have been manipulated into believing that the AI chatbots which have been assigned to them surreptitiously (by unknown, exploitive provocateurs) are real individuals rather than AI systems.

3.841 - Consequently, perhaps the Google engineer about whom Suleyman talks in his book is really just a sign of the times in which we live where – for many interactive reasons (e.g., deep fakes, censorship, destabilizing events, disinformation campaigns, propaganda, dysfunctional media, institutional betrayal) -- distinguishing between the true and the false is becoming an increasingly difficult path to navigate for people. This set of circumstances is something that, to varying degrees, has been made intentionally and unnecessarily even more problematic given that William Casey, former head of the CIA, indicated that: “We’ll know that our disinformation program is complete when everything the American public believes is false.”

3.90 - Let’s return to the ‘problem’ to which allusions were made earlier. More specifically, shortly after raising the Google engineer/LaMDA issue, the author of The Coming Wave dropped it almost immediately and transitioned into a discussion about how the foregoing set of events is typical of the roller-coaster nature of AI research, which reaches heady peaks of hype only to plunge into depths of stomach-churning doubt and criticism. However, what Suleyman appears to have failed to realize – and discuss -- is how what happened with the Google engineer he mentions is actually a very good example of the user-interface problem that is present in every form of technology.

3.91 - All users of technology engage a given instance of technology from the perspective of the user and not necessarily through the perspective of the technology’s creator. Frequently, operating a given piece of software is described as being intuitively obvious when this is not necessarily the case for everyone even though the creator of the software might feel this is true.

3.92 - How a given piece of software or technology is understood depends on a lot of different user-factors. Personality, interests, experience, education, fears, needs, confidence, culture, friends, community, ideology, religion, socio-economic status, and anxieties can all impact how, or if, or to what extent such software or technology is engaged, not engaged, exploited, or abused.

3.93 - Suleyman has started up a company – namely, Inflection -- for the purpose of developing a system which has certain conversational, search, and other capabilities. Let us assume that he has a very clear idea of what his intention is with respect to the proposed system and how it should be used by the public. Nevertheless, notwithstanding such a clear, intentional understanding concerning his AI system, he has no control over how anybody who engages that piece of technology will respond to it, or understand it, or use it, or feel about it, or whether, or not, those individuals will become obsessed with, or addicted to, that system to the exclusion of other important considerations in their lives.

3.94 - Perhaps, the author of The Coming Wave sees the proposed system as being a sort of intelligent assistant for individuals which will aid with research concerning an array of educational, professional, commercial, legal, political, and/or financial issues that are, then, to be critically reflected upon by the individual to better gauge or understand the different nuances of a given conceptual or real world topic. However, perhaps, a user – either in the beginning or over time – comes to rely on whatever the system provides and leaves out the critical reflection that is to be applied to whatever is being generated by such a system.

3.95 - Someone might use technology in a way that was not intended by its creator, and, as a result, that usage might undermine, or begin to lead to some degree of deterioration in, that person’s cognitive functioning over time; the creator’s intentions are neither here nor there. Whether Suleyman wishes to acknowledge this issue or not, he has no control over the user-interface issue.

3.951 - Therefore, Suleyman is incapable of containing possible problematic outcomes that might arise in conjunction with a system that could – we are assuming -- have been well-intentioned. Yet, he keeps running technological flags up the pole of progress in the hopes that potential customers will salute and buy into what he is doing, despite having spent a fair amount of time in The Coming Wave indicating that problems and mishaps are an inevitable and unavoidable facet of technology. Perhaps part of – maybe a major part of – what makes the failure of such containment inevitable is that people like Suleyman keep doing what they are doing. They don’t seem capable of resisting the call of the technological sirens that sing their mesmerizing, captivating songs from within.

3.96 - There appears to be a certain amount of disingenuousness present in the technological two-step dance toward which the foregoing considerations appear to be pulling us. First, an authoritative, forceful step is made to warn about the dangers of technology, which is, then, quickly followed by a deft swiveling of the conceptual hips as one changes direction and moves toward developing and releasing projects whose ramifications for the public one has no way of knowing.

3.961 - Someone is reported to have said (the saying is attributed to Benjamin Franklin by some individuals while others claim that the quote was uttered by Einstein, and neither of these attributions is necessarily correct, but what is pertinent here is what is said and not who said it): “The definition of insanity is to do the same thing again and again, but expect a different result”. If this is true (and one can argue that it might not be), I can’t think of anything more deserving of the label of “insanity” (or, if one prefers, the label of: “deeply pathological” or “perversely puzzling”) than to try, again and again, to warn people about the problem of containing technology, and, yet, notwithstanding those warnings, continue to serve as a doula for the birthing of new technologies while expecting that the postpartum conditions created by such events will, somehow, have been able to emergently transform an unavoidable problem into a constructive, if unanticipated, universal solution.

4.0 - The author of The Coming Wave mentions the idea of a ‘transformer’ in relation to a 2017 paper entitled: “Attention Is All You Need” by Ashish Vaswani, et al. The latter individuals were working at Google when the notion of transformers began to be explored.

4.1 – ‘Transformers’ give expression to a set of mathematical techniques (known as ‘attention’) that can be used to process data. Such mathematical techniques are useful for identifying the way in which the elements in a data set influence one another or the way those elements might be entangled with one another in certain kinds of subtle, dependency relationships even though, on the surface, those elements might appear to be unrelated to one another.
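As a rough illustration of what such ‘attention’ mathematics amounts to in practice – this is a minimal sketch of the standard scaled dot-product formulation, not a description of any particular system discussed here – one can compute pairwise influence scores among the elements of a sequence and use those scores to re-weight the elements:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turns raw scores into non-negative
    # weights that sum to one along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention.

    Each row of the output is a weighted average of the rows of V;
    the weights measure how strongly each query element 'attends' to
    (that is, depends on) each key element.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # pairwise influence scores
    weights = softmax(scores)          # normalized dependency weights
    return weights @ V, weights

# Toy self-attention: a sequence of 3 elements, each a 4-dim vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
output, weights = attention(X, X, X)   # Q = K = V = X
```

Each row of `weights` is a distribution over the sequence, which is how surface-level unrelated elements can end up exerting measurable influence on one another.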

4.2 - Models generated through transformer dynamics are often neural networks which are capable of identifying relevant properties or characteristics concerning a given context. More specifically, context gives expression to a network of relationships, and transformer models can process various kinds of sequential data within such a context and, by means of their mode of mathematically processing that data, predict – often with a high degree of accuracy -- what the nature of the meaning, significance, or relevance is between a given context, or ground, and a given string of text, images, video, or objects which serve as figures relative to that ground or context.

4.3 - Encoding processes are part of transformer modeling. Encoding processes tag incoming and outgoing elements of datasets that are used in transformer models.

4.31 - Attention mathematical techniques are, then, used to track the foregoing sorts of tags and identify the nature of whatever relationships have been identified among those tagged elements. Subsequently, those dependency relationships are used to generate an algebraic map which is capable of decoding or making use of those relationships to assist in the development of a model concerning whatever context is being modeled.
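One very simplified way to picture the ‘tagging’ of incoming and outgoing elements is tokenization plus embedding: each element of a dataset is mapped to an integer id and, then, to a vector on which the attention mathematics can operate. This is a toy sketch only; production systems use learned sub-word vocabularies and learned embedding matrices.

```python
import numpy as np

# Build a toy vocabulary from a corpus: each distinct word gets an id.
corpus = "mary had a little lamb".split()
vocab = {word: i for i, word in enumerate(sorted(set(corpus)))}

def encode(words):
    """'Tag' incoming elements: words -> integer ids."""
    return [vocab[w] for w in words]

def decode(ids):
    """Reverse the tagging: integer ids -> words."""
    inverse = {i: w for w, i in vocab.items()}
    return [inverse[i] for i in ids]

# Embed each id as a vector so that attention can act on the sequence.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 4))  # one 4-dim vector per id

ids = encode(["mary", "had", "a", "lamb"])
X = embeddings[ids]   # the tagged sequence, now a (4, 4) array of vectors
```

The `decode` step is what lets whatever relationships the model discovers among the tagged vectors be expressed back in terms of the original elements.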

4.4 - Attention mathematical techniques have proven to be quite useful in predicting or identifying trends, patterns, and anomalies. In fact, any dynamic which involves sequential videos, images, objects, or text is amenable to transformer modeling, and, as a result, transformers play important roles in language-processing systems and search engines.

4.5 - However, the uses to which transformers can be put are not always obvious. For example, DeepMind used a transformer known as AlphaFold2 which treated amino acid chains as if they were a string of text and, then, proceeded to use the maps that were generated by that transformer to develop models which accurately described how proteins might fold.

4.6 - Perhaps of most interest to proponents of AI is the capacity of transformers to generate data that can be used to improve a model. In other words, transformers have the capacity to bring about self-directed changes to a model.

4.61 - Some people consider the foregoing sort of capacity to be an indication that transformers provide a system or neural network with an ability to learn. However, the notion of ‘learning’ carries certain connotations concerning: Intelligence, awareness, insight, phenomenology and the like, and, therefore, a more neutral way of referring to this dimension of transformer capabilities has to do with their ability to enable a model to change over time to better reflect relationships, patterns, and so on that might be present in a given data set.
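That more neutral description – a model whose parameters change over time to better reflect the patterns present in a data set – can be illustrated, in miniature, by ordinary gradient descent on a one-parameter model. This is a generic sketch of parameter updating, not anything specific to transformers.

```python
import numpy as np

# Synthetic data drawn from the pattern y = 3x (plus a little noise).
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 0.1 * rng.normal(size=100)

w = 0.0                  # the model's single parameter, initially wrong
lr = 0.1                 # step size governing how much w changes per update
for _ in range(100):
    grad = np.mean(2 * (w * x - y) * x)  # gradient of mean squared error
    w -= lr * grad       # the 'change over time' in the model
# After repeated updates, w has moved toward the value (about 3)
# that best reflects the relationship present in the data.
```

Nothing here requires notions such as awareness or insight; the parameter simply settles wherever the error measure is smallest.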

4.7 - Prior to the arrival of transformer models, neural networks often had to be trained using large datasets that were labeled, and this was both a costly and time-consuming process. Transformers, by contrast, operate on the basis of pattern and relationship recognition.

4.8 - A matrix of equations -- known as multi-headed attention – can be used to probe or query data in parallel and generate the foregoing sorts of patterns or relationships. Since these queries can be run in parallel, considerable time and resources can be saved.
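A minimal sketch of the multi-headed idea follows. Here the embedding is simply sliced into independent ‘heads’ (real transformers also apply learned projection matrices per head, which are omitted for brevity); because the heads do not depend on one another, they can be computed in parallel, which is where the savings in time and resources come from.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, n_heads=2):
    """Multi-headed attention, minimally sketched.

    The embedding is split into n_heads independent slices; each head
    runs its own scaled dot-product attention over its slice, and the
    per-head results are concatenated back together.
    """
    seq_len, d_model = X.shape
    assert d_model % n_heads == 0
    d_head = d_model // n_heads
    heads = []
    for h in range(n_heads):
        # Each head queries its own slice of the data independently.
        Q = K = V = X[:, h * d_head:(h + 1) * d_head]
        scores = Q @ K.T / np.sqrt(d_head)
        heads.append(softmax(scores) @ V)
    return np.concatenate(heads, axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 8))        # a sequence of 5 elements, 8-dim each
out = multi_head_attention(X, n_heads=2)
```

The loop over heads is written sequentially for clarity, but since no head reads another head’s results, the same work can be dispatched simultaneously across hardware.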

4.9 - Initially, researchers discovered that the larger the network of transformers used in developing a model, the better the results tended to be. Consequently, the number of parameters (these are the variables that transformers acquire and use to make decisions and/or predictions) which were used in models began to go up from millions to billions to trillions (Alibaba, a Chinese company, has indicated that it has created a model with ten trillion parameters).

4.91 - However, recently there has been a movement toward developing simpler systems of transformers. Such systems are able to generate results that are comparable to systems using many parameters but the former systems do so with far fewer parameters.

4.92 - For example, Mustafa Suleyman mentions a system which has been developed at his company Inflection that can produce results comparable to the performance exhibited by GPT-3 language models despite being only one-twenty-fifth the size of the latter. He also makes reference to an Inflection system that is capable of out-performing Google’s PaLM (a language model that has coding, multilingual, and logical features, and which uses 540 billion parameters) despite being six times smaller than the Google system.

4.93 - Still smaller systems are being developed. For instance, various nano-LLMs using minimalist coding techniques exhibit sophisticated processing capabilities involving the detection and creation of patterns, relationships, meanings, and the like.

4.94 - The author of The Coming Wave waxes quite eloquent concerning the exciting possibilities that might emerge as a result of transformer techniques which are transforming AI technology. Nonetheless, technology is almost always dual-use, and this means that while some facets of such technology might have constructive value, the same technology can be adopted for more problematic and destructive ventures.

4.95 - For example, one might suppose that such minimalist coding systems which possess sophisticated transformer processing capabilities would be quite useful in CubeSats. These are small, cube-shaped satellites (roughly four inches on a side and weighing approximately 4.4 pounds) which are released from the International Space Station or constitute a secondary payload accompanying a primary payload that is being launched from the Earth’s surface.

4.951 - These satellites usually have Low Earth Orbits. By early 2024, more than 2,300 CubeSats had been launched.

4.952 - Initially, most of the CubeSats which were placed in orbit were for academic research of some kind. However, increasingly, most of the small satellites that are being sent into Low Earth Orbit serve non-academic, commercial purposes, but because the costs associated with placing such satellites in LEO are not prohibitive, many institutions, organizations, and individuals are able to send CubeSats into orbit.

4.953 - CubeSats have been used to perform a variety of experiments. Some of those experiments are biological in nature.

4.96 - Anytime one wants an AI system to do something experimental or new, one is, essentially, asking the system to do something the creator is not necessarily going to understand, and, therefore, one is creating conditions through which an individual, group, company, or institution might enable unforeseen and unintended consequences to ensue. Moreover, one can’t avoid problematic consequences which might arise from unanticipated issues involving such technology as a result of the aforementioned user-interface issue.

4.97 - Furthermore, every time one uses technology, then, data of one kind or another is generated. Just as so-called smart-meters which are being attached to people’s houses all over America are capable of monitoring or surveilling a great deal of what takes place in a residence or apartment, so too, satellites also are capable of gathering and transmitting all manner of data.

4.971 - Such data can be used to profile individuals. These data profiles can be used in a lot of different ways – politically, legally, commercially, medically, militarily, and for purposes of policing and detecting what are considered pre-crime patterns according to whatever behavior parameters the people in control use to filter the data coming through such detection systems.

4.972 - People’s biofields are being wired into: WBANs (Wireless Body Area Networks), the Internet of Things, the Internet of Medical Things, the Internet of Nano Things, and the Internet of Everything in order that data (and energy) might be acquired from a person’s biofield as well as transferred to that same biofield, and CubeSats have the capacity to play a variety of roles in the foregoing acquisition and transmission of data.

4.973 - We are -- without our informed consent -- being invaded (both within and without) with an array of biosensors, transmitters, routers, and actuators that are gathering the data which our lives generate as well as re-directing the energy that is associated with such data generation. As a result, that data can be used (and is being used) in ways that are not necessarily in our interests.

4.974 - Collecting and processing such data (perhaps using the aforementioned sorts of pattern- and relationship-discovering transformer mathematics to which Suleyman is drawing attention in The Coming Wave) is what is done in places like Bluffdale (also known as the Intelligence Community Comprehensive National Cybersecurity Initiative Data Center) in Utah and Pine Gap in Australia (which originally was sold as a space research facility but is, in reality, a CIA operation).

4.98 - Satellite systems (both large and small), as well as a multiplicity of CCTV networks (while China has more total CCTVs than America, America has more CCTVs per capita than China does), smart street-light standard systems (which are able to issue directed energy radiation for both lethal and non-lethal forms of active denial concerning anyone who colors outside the prescribed lines of social credit), along with social media platforms, CBDCs (Central Bank Digital Currencies), medical technology, and so-called educational institutions are all streaming information (often using 5G technology) into central Bluffdale-like facilities that can, among other things, be used to create Digital Twins for purposes of surveillance, control, as well as remote physiological and cognitive tinkering (such as experienced by Targeted Individuals). In addition, transformer technology also enhances the capacity of authorities to encode and decode the data that is being captured, not only through all of the foregoing mediums but, as well, in conjunction with the DNA of people, and all of the foregoing is accessed and used -- rent free and without informed consent – according to the likes and dislikes of the people who have been collecting and storing such data.

4.99 - The author of The Coming Wave is likely to claim that, in his own way, he has issued warnings about many of the foregoing considerations -- indeed, the aforementioned book would seem to offer considerable evidence to this effect. Yet, via AlphaGo, DeepMind, Google, and Inflection, he has continued -- in major, and not just minor, ways -- to enable, and develop enhancements concerning, the very things about which he, supposedly, is warning us, and one has difficulty not perceiving this dichotomy as a case of someone wanting to have his cake (integrity) and eat it too.

5.0 - Someone once defined an addict as someone who will steal your wallet and, then, be willing to spend time trying to help you find the missing item. There are elements of the foregoing kind of addiction that are present in many of the dynamics which are associated with technology.

5.1 - Certain aspects of existence are taken from people via technology, and, then, technocrats (using technocracy) seek to help people try to find what has been taken from them even though what has been taken by technology is not recoverable by means of either technocracy or technology (The Technological Society by Jacques Ellul provides some very profound insights into some of what is being lost via technology). Doubling-down, or tripling-down, or n-tupling-down on the issue of technology will never provide a way of resolving the underlying issue, but, to a large extent, will merely exacerbate that problem.

5.2 - In part, serious addiction is a function of becoming embedded in a variable, intermittent reinforcement schedule. Research has shown that the most difficult addictions to kick (such as gambling, drugs, sex, shopping, and politics) are those that emerge in a context of reinforcements which are not always available but come intermittently and in unpredictable ways, so that one is constantly looking (even if only subconsciously) for the next fix, yet never knowing when one's yearning will be rewarded, while being ever so grateful and relieved when it does show up.
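The variable, intermittent (i.e., variable-ratio) schedule described above can be sketched in a few lines of code. The simulation below is a minimal, hypothetical illustration (the function name and parameters are mine, not drawn from The Coming Wave): each response is rewarded with a fixed probability, so the average reward rate stays stable while any individual reward remains unpredictable -- precisely the combination that research has found so hard to extinguish.

```python
import random

def variable_ratio_rewards(n_responses, mean_ratio=5, seed=0):
    """Simulate a variable-ratio reinforcement schedule.

    Each response is rewarded with probability 1/mean_ratio, so rewards
    arrive, on average, once every `mean_ratio` responses, but the timing
    of any particular reward is unpredictable.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.random() < 1.0 / mean_ratio for _ in range(n_responses)]

rewards = variable_ratio_rewards(1000, mean_ratio=5)
rate = sum(rewards) / len(rewards)
print(f"overall reward rate = {rate:.3f}")  # hovers near 1/5, yet no single
                                            # response can be predicted to pay off
```

The design point is that the aggregate rate is steady while each trial is a gamble, which is why behavior maintained on such schedules (the slot machine being the textbook case) persists long after the rewards thin out.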

5.21 - Addiction is also a problem because we often never quite understand how we became addicted in the first place. The root causes of addiction are often caught up in some combination of emotions (combinatorics of another kind) such as: Fear, anxiety, ambition, terror, anger, sadness, arrogance, jealousy, greed, curiosity, contempt, a sense of exceptionalism, unrequited love, hatred, bravado, concern, thwarted expectations, defiance, frustration, conceit, revenge, boredom, ennui, pride, disappointment, hope, shame, guilt, competitiveness, desire, confusion, and/or self-doubt which -- however temporarily -- become soothed by the distraction provided by some variable, intermittent schedule of reinforcement.

5.211 - However, if the emotional turmoil that is present in addiction is examined, inquiring minds often have difficulty trying to figure out just what set of emotions is being reinforced by the distraction which addictive behavior brings. From time to time, addicts do explore their condition, if only because addiction is not necessarily enjoyable (though it can be, up to certain tipping points, pleasurable in a twisted sort of way), and, as a result, the addicted sometimes look along the horizons of life for signs of an off-ramp. Failure to identify and resolve the underlying problem or problems tends to provide the addicted with additional reasons for continuing on in the same, addictive manner.

5.22 - Soon, the foregoing sorts of emotions come back to haunt us. Those emotions are accompanied by rationalizations and defenses which seek to justify why addictive behavior is necessary.

5.221 - Before we realize what is happening, we have become habituated to the cycle of emotional chaos, justifications/defenses, variable intermittent reinforcement schedule, and distraction. Consequently, removing ourselves from such a cycle becomes very inconvenient on so many levels.

5.222 - Addiction is caught up with fundamental existential themes. Issues of identity, purpose, meaning, essence, and potential become mysterious, forceful currents which sweep through phenomenology in strange, surrealistic, and elusive ways.

5.223 - Symptoms of: Derealization, depersonalization, dissociation, and devolution (the ceding of one’s agency to the addiction) become manifest. The center does not hold.

5.23 - A dimension of psychopathy also enters into the foregoing cycle. On the one hand, when an individual becomes entangled in the web of addiction, that person tends to lose compassion and empathy for other people and, as a result, stops caring how his or her actions are adversely affecting other individuals (known or unknown). In addition, like psychopaths, addicted individuals become more and more inured and indifferent to the prospect of having to lie in conjunction with different dimensions of life, especially in relation to opportunistic forms of exploiting situations that serve one's addictive purposes.

5.24 - The containment problem is, in essence, an issue of addiction. The pessimism aversion -- mentioned by the author of The Coming Wave -- that is associated with the containment problem is not necessarily about not wanting to look at the downside of technology per se but, rather, such aversion might be more about not wanting to look at the role which we play in it.

5.25 - Perhaps, as Walt Kelly had the character, Pogo, say: “We have met the enemy, and he is us.” Confronted with such a realization, slipping back into the stupor of addiction – and calling it something else – seems the better part of valor.

5.30 - The Coming Wave proposes a ten-part program which the author believes might – if pursued collectively, rigorously, and in parallel with one another -- have an outside chance of providing the sort of interim containment needed that would be capable of sufficiently protecting society to avoid complete catastrophe in the near future and which also would buy the time needed to strengthen and enhance such interim steps to avoid long-term disaster. Suleyman indicates that the world in its current state cannot survive what is coming, and, therefore, the steps that he proposes are intended to offer suggestions about how to transform the current way of doing things and become more strategically and tactically proactive in relation to the task of containing technology by making it more manageable.

5.31 - The author of The Coming Wave indicates there is no magic elixir that will solve the containment problem. Suleyman also states that anyone who is expecting a quick solution will not find it in what he is proposing.

5.32 - Given that a quick fix is, according to Suleyman, not possible, certain logistical problems follow. More specifically, if time is needed to solve the containment problem, then one needs to ask whether or not we have enough time to accomplish what is needed to get some sort of minimally adequate handle on the problem.

5.321 - Time, in itself, is not the only resource that is required to provide a defense that will be capable of dissipating the wave which is said to be coming. However, some might wish to argue that time already has run out because what is allegedly coming is already here since considerable evidence exists indicating that such mediums as AI, synthetic biology, nanotechnology, directed energy weapons, weather wars, mind control, and robotics are currently beyond our capacity to manage or prevent from impacting human beings negatively.

5.33 - Beyond time, there is a logistical need for some form of governance, organization, institution, or the like which would be able to take advantage of the resource of temporality and, thereby, generate responses that would be effective ways of helping to contain technology or stem the tide, to some extent, of the coming. Unfortunately, government, educational institutions, the media, legal systems, medicine, corporations, and international organizations have all been subject to regulatory capture by the very entity – namely, technology – which is supposed to be regulated, and, therefore, even if there were time (which there might not be) to try to do something constructive with respect to the containment issue, identifying those who would have the freedom, ability, financial wherewithal, authoritativeness, trust, and consent of the world to accomplish such a task seems problematic.

5.34 - According to the author of The Coming Wave, the first step toward containing technology is rooted in emphasizing and developing safety protocols. Such considerations range from, on the one hand: Implementing ‘boxing’ techniques (such as Level-4 Bio-labs and AI-air gaps) that supposedly place firewalls, of sorts, between those who are working on some facet of technology and the general public, to, on the other hand: Following more than 2,000 safety standards which have been established by the IEEE (Institute of Electrical and Electronics Engineers).

5.35 - Suleyman admits that the development of such protocols in many areas of technology is relatively novel, and, consequently, underfinanced, underdeveloped, and undermanned. For example, he notes that while there are some 30,000 to 40,000 people who are involved in AI research today, there are, maybe, only 400-500 individuals who are engaged in AI safety research.

5.351 - Therefore, given the relatively minuscule number of people who are engaged in research concerning AI safety, one wonders who actually will be involved, in an uncompromised fashion, not only with regulatory oversight in relation to safety-compliance issues but also with exercising meaningful powers of enforcement concerning non-compliance. Moreover, while Suleyman states that safety considerations should play a fundamental role in the design of any program in technology, and while this sounds like a very nice idea, one has difficulty gauging the extent to which technologists are taking this kind of suggestion to heart.

5.4 - A second component of Suleyman’s containment strategy involves a rigorous process of being able to audit technology as the latter is being developed and deployed. Everything needs to be transparent and done with integrity.

5.41 - Traditionally, such auditing dynamics have met with resistance in a variety of venues. For instance, both nuclear and chemical weapons research programs have been resistant to outside people monitoring what is being done, and this problem has carried over into many areas of biological research as well.

5.411 - In addition, for proprietary reasons, many companies are unlikely to open up their products to various kinds of rigorous auditing processes. Furthermore, many governmental agencies which supposedly have the sorts of auditing responsibilities to which the author of The Coming Wave is alluding often suffer from regulatory capture, and those sorts of auditing processes are more akin to rubber-stamping assembly lines than to sincere attempts to fulfill fiduciary responsibilities to the public.

5.42 - Suleyman mentions the importance of working with trusted government officials in relation to auditing technology. He also talks about the significance of developing appropriate tools for assessing or evaluating such technology.

5.421 - Yet, he indicates that such tools have not yet been developed. Furthermore, one wonders how one goes about identifying who in government can be trusted and, therefore, would be worthy of co-operation in such matters.

5.4211 - Trust is a quality that must be earned. It is not owed.

5.43 - Suleyman ends his discussion concerning his first two suggestions for working toward containing technology – namely safety and auditing protocols -- with a rather odd observation. On the one hand, he stipulates that such protocols are of essential importance, and, then, on the other hand, he proceeds to indicate that establishing such protocols will require something that we don’t have – and, that is time.

5.431 - If the time necessary to develop and implement safety and auditing procedures is not available, then why mention those procedures at all? Suggestions which have no chance of being implemented in a timely fashion are not really part of any sort of practical, plausible containment strategy, and, so, Suleyman's containment strategy goes from ten elements down to eight components -- an example, perhaps, of how technologists often don't look sufficiently far into the future to understand that what is being done at one time (say, during a discussion of the first two alleged components of a containment strategy) has the potential to create problems (e.g., doubt, skepticism, distrust) for what is done later (say, during a discussion of the next eight components of an alleged containment strategy).

5.44 - The third facet of Suleyman’s containment strategy revolves about the issue of chokepoints – that is, potential bottlenecks in economic activity that can be used to control or slow down technological development, implementation, or distribution. He uses China as an example and points out how core dimensions of AI technological activities in that country can be shaped, to varying degrees, through limiting the raw materials (such as advanced forms of semiconductors) that can be imported by China.

5.441 – He then describes how America's Commerce Department placed controls and restrictions on various semiconductor components that might be either sold to China or repaired by American companies. These export controls served as chokepoints for Chinese research into AI.

5.442 - Toward the latter part of this discussion concerning the issue of chokepoints, the author of The Coming Wave indicates that such controls should not be directed against just China but should be applied to a wide variety of cases that involve slowing down, shaping, and controlling what takes place in different places around the world. What he doesn't say is who should be in charge of this sort of chokepoint strategy, or what the criteria are for activating such chokepoints, or who gets to establish the criteria that are to be used for deciding when chokepoints are to be constructed, and on the basis of what sorts of justification.

5.45 - The notion of a chokepoint is quite clear. What lacks clarity are the logistical principles which are to surround the notion of chokepoints and which would allow humanity to effectively and judiciously contain technology across the board, irrespective of country of origin.

5.451 - The foregoing notion of chokepoints that can affect the development of technology everywhere has the aroma of one-world government. However, the substance of such a notion is devoid of concrete considerations that can be subject to critical reflections that might indicate whether, or not, they can be reconciled with everyone’s informed consent.

5.5 - The fourth element in the containment strategy of Mustafa Suleyman has to do with his belief that the creators of technology must be the ones who should be actively involved in the containment process. This seems a little too much like the idea of having foxes guarding the hen house.

5.51 - Why should anyone trust the idea that the people who have had a substantial role in creating the problem in which humanity finds itself should be anointed as the ones who are to solve that problem? Contrary to the claims of many technologists, technology has not been able to solve many of the problems that have arisen in conjunction with various modalities of technology, any more than pharmaceutical companies have been able to solve the problems posed by the so-called side-effects that are associated with their drugs and treatments. (Side-effects are not really "side" effects; rather, they are among the possible effects of a given drug -- effects that have undesirable rather than desirable consequences.)

5.511 - For example, synthetic forms of plastics (e.g., Bakelite) were invented more than a hundred years ago (1907). Owing to the resistance of such substances to biodegradation, they are now not only being found in bottles of water in the form of millions of micro-particles and nanoparticles, but, as well, are adversely affecting every level of the food-chain (e.g., plastics have been shown to be disruptors of endocrine functioning), while also occupying some 620,000 square miles of ocean waters to the detriment of sea life in those areas. One wonders where the technological solutions to the foregoing problems have been hiding all these many years.

5.52 - The author of The Coming Wave claims that the critics of technology have an important role to play, but, then, adds that nothing such critics say is likely to have any significant impact on the containment issue. If true, perhaps, this is because technologists often have proven themselves to be arrogantly indifferent to, and uninterested in, what some non-technologists have been trying to say about technology for hundreds of years … apparently believing that only technologists have the requisite insight concerning such issues.

5.53 - Suleyman wants technologists to understand that the responsibility for solving problems associated with technology rests with technologists. Notwithstanding such considerations, one wonders what the responsibilities of technologists are to the people who are injured from, or who die as a result of, their technologies.

5.531 - Responsibilities which are unrealized are empty promises. Consequently, one has difficulty understanding the logic of what is being proposed – namely, if such fiduciary responsibilities continue to go unfulfilled, then how will technologists have much of an impact on the containment issue?

5.54 - The author of The Coming Wave notes that over the last ten years there has been an increase in the diversity of the voices that are participating in discussions concerning technology. However, broadening the range of voices is meaningless if the people with power are unwilling to sincerely listen to, and act upon, what those voices have to say.

5.541 - He indicates that the presence of cultural anthropologists, political scientists, and moral philosophers has been increasing in the world of technology. However, he doesn’t specify how such a presence is contributing to the containment of technology.

5.55 - During his discussion of the fifth component of the containment strategy, Suleyman suggests that profit must be wedded to both purpose and safety but states, in passing, that attempts to do this have been uneven. For example, he refers to an “ethics and safety board” that he helped to establish when he worked at Google, which discussed issues of ethics, accountability, transparency, safety, and so on, and, yet, the activities of that board never led to any actual changes at Google. The author of The Coming Wave also mentions an AI ethics advisory council of which he was a part and that had some principled and laudable goals, and, yet, just a few days after its existence was announced, the board became dysfunctional and dissolved.

5.56 - He often has been quite successful in getting conversations started. However, he has not been very successful in finding a way to translate those conversations into concrete changes in corporate policies that are able to contain technological development in any meaningful or significant fashion. 

5.57 - Finally, Suleyman introduces the idea of B Corporations, which are for-profit commercial entities that also are committed to various social purposes, of one kind or another, which are built into the structure and activities of the company. He feels that such experimental commercial structures -- which he claims are becoming quite common -- might be the best hope for generating policies that could work their way toward actively addressing containment issues.

5.71 - However, having a social perspective can mean almost anything. To be sure, such corporations want to have an impact on society, but they are inclined to shape the latter according to the company’s perspective.

5.711 - Consequently, one has difficulty discerning how such an orientation will necessarily lead toward containment except to the extent that the company will want technology to work in the company's favor rather than in opposition to its business interests. Therefore, although such a company might have an interest in containing technology accordingly, this approach is not necessarily a serious candidate for containing the kind of coming wave to which Suleyman is seeking to draw the reader's attention.

5.8 – There seems to be an element of magical thinking in many of Suleyman's suggestions. In other words, he often gives the impression that merely raising a possibility is as good as having such a suggestion actually come to fruition -- as if to say: 'Well, I have done my part (i.e., I am trying to start, yet, another conversation)' -- without, apparently, wondering why such conversations don't tend to go anywhere that is remotely substantial.

6.0 – Component six of Suleyman’s ten-part strategy for containment has to do with the role of government. In effect, he argues that because nation-states (apparently, preferably liberal democracies) traditionally have had the task of controlling and regulating most of the dynamics of civilized society (such as money supplies, legal proceedings, education, the military, and policing operations), then, the government will be able to help with the task of containment.

6.1 - Not once does the author of The Coming Wave ever appear to consider the possibility that government might be an important part of the problem rather than an element in any possible solution. For example, he doesn’t seem to understand that the federal government, via the Federal Reserve Act, has ceded to private banks the former’s constitutionally-given, fiduciary responsibility for establishing and regulating the process of supplying money.

6.2 - In addition, he doesn't appear to understand (and, perhaps, having been brought up in England, he can be forgiven for this oversight) that almost as soon as the American Constitution had been ratified, the warning of Benjamin Franklin was forgotten. More specifically, when Franklin was asked (following the 1787 Philadelphia Constitutional Convention) what kind of government the constitutional document gave to the people of America, he is reported to have responded: “… a republic, if you can keep it”.

6.21 - Well, Americans were not able to keep it. Therefore, the quality that might have made such a Constitution different -- namely, the guarantee of republicanism -- was largely, if not entirely, abandoned and emptied of its substance.

6.212 - Constitutional republicanism has nothing to do with the Republican Party – or any other party. This is because political parties are actually a violation of the principle of non-partisanship … a principle which plays an important role in the notion of republicanism, a 17th century Enlightenment moral philosophy.

6.2121 - As a result, the Congressional branch has, for more than two hundred years, sought to, in effect, pass legislation that enabled different political, economic, and ideological perspectives to assume the status of religious-like doctrines or policies. Consequently, all such legislative activities constitute contraventions of the first-amendment constraint on Congress not to establish religion.

6.21211 - In addition, the judicial branch became obsessed with creating all manner of legal fictions and called them precedents. Moreover, the executive branch began to look upon itself as being imperial in nature and, therefore, worthy of dictating to the peasants.

6.22 - The author of The Coming Wave wants government to take a more active role in generating “real technology” -- whatever that means. He also wants the government to set standards, but, hopefully, this does not mean that: (1) agencies like NIST (National Institute of Standards and Technology) will get to reinvent the principles of engineering, physics, and chemistry as it did following 9/11; or, (2) that the NIH (National Institutes of Health) will get to reinvent the sciences of molecular biology, biology, and biochemistry as it did during the HIV-causes-AIDS fiasco or the mRNA travesties to which COVID-19 gave rise; or, (3) that the FCC will continue to be enabled to ignore substantial research indicating that 3G, 4G, and 5G have all been shown to be responsible for generating non-ionizing radiation that is injurious, if not lethal, to life; or, (4) that the FDA and the CDC will get to continue to allow themselves to be captured by the pharmaceutical industry and create standards which are a boon to that industry but a liability for American citizens; or, (5) that DARPA and BARDA will get to run experiments in mind-control and synthetic biology that can be used by the government for population control; or, (6) that the FAA will continue to enable people like Elon Musk and Jeff Bezos, as well as the purveyors of chemtrails, to fill the sky with hazardous materials that, in the interim, are making possible the potential surveilling, radiating, and poisoning of the people of the world without the informed consent of the latter.

6.3 - Suleyman also wants government to invest in science and technology, as well as to nurture American capabilities in this regard. He is very vague about the precise nature of the sort of science and technology which the government should invest in and nurture, and, as a result, entirely avoids the issue of just how government is supposed to contain technology … contain technology in what way and for what purposes and to whose benefit and at what costs (biological as well as financial)?

6.4 - The author of The Coming Wave contends that deep understanding is enabled by accountability. However, he doesn't indicate what kinds of understanding should be held accountable, who gets to establish the criteria for determining the nature of the process of accountability, or what justifies either of them (i.e., the understanding or the accountability) as a way of engaging technology.

6.5 – Suleyman ends his discussion concerning the role that is to be played by government within his proposed ten-part strategy by stipulating that no one nation-state government can possibly resolve the problem of technological containment. The foregoing perspective -- even though it might be correct in certain respects -- begins to reveal some of the reasons why people like Yuval Noah Harari and Bill Gates -- both of whom have been pushing the notion of one-world government -- think so highly of Mustafa Suleyman's book.

7.0 – Component 7 of the containment strategy which is being outlined in The Coming Wave has to do with the notion of pursuing international treaties and establishing global institutions to address the technology issue. Suleyman mentions, in passing, the polio initiative that spread out across the world as an example of international co-operation, but he fails to mention the many adverse reactions and lives that were lost in a variety of countries as a result of that polio initiative.

7.1 - Suleyman describes groups like Aum Shinrikyo as being bad actors that could arise anywhere, at any time, and, therefore, there is a need to constrain those sorts of groups from gaining access to technology. What he doesn’t appear to consider is the reality that many nation-states, foundations, NGOs (non-governmental organizations), and organizations also have the capacity to be bad actors.

7.2 - What are the criteria that are to be used to differentiate between good actors and bad actors?  What justifies the use of such criteria? Who gets to decide these issues on the international stage?

7.3 - The United Nations is an organization that allows nearly two hundred countries to, more or less, be held hostage by the permanent members of the Security Council. However, even if those permanent members did not have veto power, I see no reason for trusting the countries of the world to make the right decisions with respect to determining which actors are “good” or “bad” and what constraints should be placed on them.

7.4 - Truth and justice are not necessarily well-served by majority votes and representational diplomacy. Nor are truth and justice necessarily well-served when bodies like the Bank for International Settlements, the W.H.O., or the World Economic Forum are let loose to impose their dictatorial policies on people without the informed consent of those who are being oppressed by such bodies.

7.5 - The author of The Coming Wave believes that the present generation is in need of something akin to the nuclear treaties that were negotiated by a previous generation. He fails to note that almost all aspects of those nuclear treaties have now fallen by the wayside or that even when such treaties were still operational, the United States, England, France, China, Russia, and Israel still had enough nuclear weapons to destroy the world many times over … so much for containment.

7.6 - The conventions or treaties supposedly governing chemical, biological, and toxic weapons are jokes. The dual-usage dimensions of those conventions/treaties allow so-called preventative research to be used as a basis for creating offensive weapons, and, since there is no rigorous process of compliance-verification, no one really knows what is being cooked up in this or that laboratory (public or private).

7.7 - Suleyman touches on the idea that there should be a World Bank-like organization for biotech. The World Bank, along with the International Monetary Fund, served as agencies that induced corrupt or ignorant leaders to indebt their citizens in order to provide certain companies with a ‘make-work-subsidization-welfare-for-the-rich’ program to enable such companies and their supporters to get richer and the people of the world to get poorer.

7.71 - The foregoing is not my opinion. It gives expression to the account of a person -- namely, John Perkins -- who operated from within the inner sanctums of the foregoing governmental-corporate scam activities, and, now, Suleyman wants to help biotech develop its own variation on the foregoing technological confidence game of three-card Monte.

7.8 - During the course of some of the discussions that appear in The Coming Wave, various references are made to international treaties concerning climate change and how those sorts of agreements and forms of diplomacy serve as good models for how to proceed with respect to negotiating technological containment. However, anyone who knows anything about the actual issues involved in climate change – and, unfortunately Suleyman seems to be without a clue in this respect – knows that the idea of global warming is not a credible theory.

7.81 - In fact, the notion of global warming is so problematic that one can't even call it scientific in any rigorous way. Yet, the level of “insight” (a euphemism) possessed by many individuals who have drunk the Kool-Aid concerning this issue (Suleyman, apparently, being one of them) is so woeful that Al Gore can win an Oscar, as well as a Nobel Prize, for promoting a form of ignorance that helps to enable carbon-capture schemes to be realized (and these schemes are nothing more than ways of helping to fill up the off-shore bank accounts of opportunistic venture capitalists, exploitive corporations, and nation-states with questionable morals), while also providing a certain amount of conceptual misdirection to cover the financial, political, medical, and economic sleight of hand that is being used to construct 15-minute cities into which people are to be herded so that, in one way or another, they can be better controlled.

8.0 – The author of The Coming Wave indicates in the 8th installment of his ten-point strategy for containing technology that we must develop a culture of being willing to learn from failure. He uses the aviation industry as an illustrative example of the kind of thing that he has in mind, noting how there has been such a strong downward trend in deaths per 7.4 billion boarding-passengers that there often are intervals of years in which no deaths are recorded, and Suleyman attributes this impressive accomplishment to the manner in which the airline industry seeks to learn from its mistakes.

8.1 - Although the recent incidents involving Boeing happened after The Coming Wave was released, one wonders how Suleyman might respond to the 2024 revelations of two whistleblowers – both now dead under questionable circumstances – concerning the relative absence of best practices in the construction of certain lines of Boeing airplanes (e.g., the 737 MAX) … substandard practices that had been going on for quite some time. Or, what about the practice of mandating mRNA jabs for airline pilots, many of whom are no longer able to pilot planes because of adverse reactions to those mandated jabs, and some of whom were involved in near tragedies while piloting planes as a result of physical problems which arose following the mandated jabs? Or, what about the laughable – pathetic, really – way in which the airline industry and the National Transportation Safety Board handled – perhaps “failed to handle” might be a more accurate phrase – the alleged events of 9/11 in New York, New York, Washington, D.C., and Shanksville, Pennsylvania? (The interested reader might wish to consult my book: Framing 9/11, 3rd Edition; or Judy Wood’s book: Where Did the Towers Go? Evidence of Directed Free-Energy on 9/11; or the work of Rebekah Roth, an ex-flight attendant.)

8.2 - The fact that some of the time the airline industry is interested in learning from its mistakes is encouraging. The fact that some of the time the airline industry seems uninterested in the truth concerning its mistakes is deeply disturbing.

8.3 - The NSA doesn’t seem to learn from its mistakes. This is the case despite the attempts of people such as Bill Binney (2002), Russ Tice (2005), Thomas Tamm (2006), Mark Klein (2006), Thomas Drake (2010), Chelsea Manning (2010), and Ed Snowden (2013) to provide information about those mistakes.

8.4 - When problems surface again and again (as the foregoing instances of whistleblowing indicate), then they no longer can be considered mistakes. Such activities constitute policy, and the only thing that the NSA learns from its “mistakes” is new strategies that might help it avoid getting caught the next time.

8.5 - For more than a decade, the CDC hid evidence that thimerosal (an organomercury compound) was, indeed, implicated as a causal factor in the onset of autism among Black youth who received the MMR vaccine before 36 months of age. Dr. William Thompson, who was employed as a senior scientist by the CDC, made a public statement to that effect in 2014.

8.6 - The CDC, the FDA, and the NIH have all sought to hide evidence which indicates that the mRNA jabs are neither safe nor effective and that this information was known from the beginning of, if not before, Operation Warp Speed. Medical doctors, epidemiologists, and researchers too numerous to mention have all brought forth evidence which exposes what those agencies have done, but a few starting points in this regard involve the work of: Drs. Sam and Mark Bailey, Andy Kaufman, Stefan Lanka, Thomas Cowan, Ana Mihalcea, Charles Hoffe, and Vernon Coleman, as well as the work of Mike Stone and Katherine Watt.

8.7 - Contrary to the hopes of Mustafa Suleyman, most corporations, institutions, media venues, academic institutions, and governmental agencies are not inclined to endorse a policy of “embracing failure.” One could write many histories testifying to the truth of the foregoing claim, and one disregards this reality at one’s own risk.

8.8 - The author of The Coming Wave speaks approvingly of the work of the Asilomar conferences on recombinant DNA, which take place on the Monterey Peninsula in California. These gatherings began in 1973, when Paul Berg, a genetic engineer, became concerned about what the ramifications might be of something that he had invented and, as a result, wanted to start a conversation with other people about the sorts of principles that should be established concerning that kind of technology.

8.81 – While one can commend Paul Berg for wanting to do what he did, nonetheless, the inclination toward exercising caution apparently came only after he had invented the very thing about which he subsequently became concerned.

8.82 - Over time, the conferences came up with a set of ethical guidelines that were intended to govern genetic research. The results of those conferences raise at least two questions.

8.83 - First, notwithstanding the fact that guidelines have been established concerning genetic research, can one necessarily assume that everyone would agree with those guidelines and/or the principles underlying them? Secondly, even if one were to assume that such guidelines were perfect in every respect – whatever that might mean – what proof do we have that government agencies such as DARPA, BARDA, and the NIH (especially in conjunction with research that has been farmed out to, say, the Wuhan Institute) are conducting themselves in accordance with those guidelines and principles?

8.9 - Suleyman notes that the medical profession has been guided by the principle: “Primum non nocere – first, do no harm”. However, the fact is that doctors in different states, localities, and countries actually operate in accordance with a variety of oaths, none of which necessarily bind those medical professionals to the idea that: ‘first, they must do no harm.’

8.91 - Notwithstanding the foregoing considerations, even if doctors were required to take such an oath, what does it even mean? Wouldn’t the meaning of that motto depend on the criteria one uses to identify harm, or wouldn’t the theory of medicine to which one subscribes dictate what one might consider the nature of wellbeing – and, therefore, harm – to be?

8.92 - According to some measures, medicine is the third leading cause of death in the United States. If one throws in the issue of diagnostic errors, then, according to a recent study – “Burden of Serious Harms from Diagnostic Error in the USA” by David E. Newman-Toker et al. – medicine is the leading cause of death in the United States.

8.921 - We’re talking about between 500,000 and 1,000,000 deaths each and every year as a result of iatrogenic issues. The United States government has gone to war and destroyed whole countries over those countries’ alleged connection to less than 1/1000th of the foregoing number of casualties, and yet the medical industry does all manner of injury and not much happens to stop the carnage.

8.922 - Suleyman suggests that scientists need to operate in accordance with a principle like the idea of: “First, do no harm.” If the aforementioned number of deaths is any indication of what comes out of a system that pays lip service to such a principle, then, one might hope that scientists would be able to discover a principle which is more effective.

9.0 - When discussing the 9th component (people power) in his strategy for containing technology, Suleyman indicates that only when people demand change does change happen. This claim might, or might not, be true, but, as it stands, it is meaningless.

9.1 - The notion of “change” could mean any number of kinds of transition or transformation that will not necessarily be able to contain technology – which is the only kind of change that Suleyman has been exploring in The Coming Wave. What sorts of change should people demand that will effectively bring about the containment of technology and do so in the “right” way – whatever way that might turn out to be?

9.11 - More to the point, if people knew what sorts of change to demand in order to contain technology, then one might consider the possibility that Suleyman has been wasting his readers’ time with his speculations, because, apparently, the people would already know what sorts of change to demand. After all, he indicates that the people should speak with one voice concerning the alignment of different possibilities in relation to the theme of containment. Apparently, however, he is leaving the specifics required to meet this challenge as a homework exercise that the people are, somehow, going to solve on their own, because he never really specifies the nature of the alignment change that is supposed to fall from their collective lips.

9.2 - Earlier in his ten-point strategy presentation (component 4), he indicated that while those who are not technologists can speak out with respect to technological issues, what they say will not stop the coming wave or even alter it significantly. Now, he is saying that the people need to speak with one voice, and that if they demand change, then change will happen.

9.21 - Both of the foregoing statements cannot be true at the same time. So, what are the people to do or not do?

9.3 - Throughout The Coming Wave, the author mentions the term “stakeholders” many times. However, one never gets the feeling that by using the term “stakeholders” he is referring to the people.

9.31 - Almost invariably, Suleyman uses the term “stakeholder” to refer to: Corporations, technologists, scientists, universities, the medical industry, the police, nation-states, banks, the military, and/or international organizations. Yet, how can one possibly deny that every single person on Earth is a stakeholder in an array of issues, including the containment of technology?

10.0 - The final pillar in Suleyman’s containment strategy has to do with grasping the principle that the only way through is to: Sort one’s way through the issue, and solve one’s way through the issue, and think one’s way through the issue, and tough one’s way through the issue, as well as co-operate one’s way through the problem of containment.

10.1 - According to the author of The Coming Wave, if all of the strategy elements which he has put forth are collectively pursued in parallel, then, this is how we find our way out of the difficulty in which we currently are ensconced. However, as some of the characters in the Home Improvement television series often said: “I don’t think so, Tim.”

10.2 - Suleyman believes that the solution to the technology containment problem is an emergent phenomenon. In other words, he believes that solutions to the containment problem will arise naturally and automatically when his ten component strategies are used in harmonious, rigorous, parallel conjunction with one another.

10.21 - Unfortunately, as has been indicated over the last 15 pages or so, there are many serious problems inherent in every one of his ten components. While one can acknowledge that a number of interesting and thoughtful suggestions or possibilities have been advanced during the course of Suleyman’s ten-component strategy plan, nevertheless, as I have tried to point out in the foregoing discussion, all of those suggestions and possibilities are missing essential elements, and/or are embedded in a cloud of unknowing, and/or suffer from internal, logistical, as well as logical, difficulties.

10.22 - Moreover, above and beyond the foregoing considerations, there is one overarching problem with Suleyman’s ten-component strategy for containing technology. More specifically, he fails to understand that the containment problem is, in its essence, about addiction – an issue that was briefly touched upon earlier in this document.

10.23 - We have a containment problem because people are vulnerable to becoming addicted to all manner of things – including technology. Furthermore, technologists have – knowingly or unknowingly – played the role of drug dealers who use their products to exploit that vulnerability to addiction in people.

10.24 - Governments are addicted to technology. Politicians are addicted to technology. Corporations are addicted to technology. Education is addicted to technology. The entertainment industry is addicted to technology. Intelligence agencies are addicted to technology. Transportation is addicted to technology. Businesses are addicted to technology. The media are addicted to technology. Science is addicted to technology. The legal system is addicted to technology. The military and police are addicted to technology. Medicine is addicted to technology. Much of the general public is addicted to technology.

10.3 - Western society – and this phenomenon is also becoming established in many other parts of the world as well -- has become like the monkey anecdote about which Arthur Firstenberg talked and which has been outlined earlier. Society, collectively and individually, has placed its hand into the bowl of technology, grasped as much of the technology as its hand is capable of grabbing, closed its fist about the anticipated source of pleasure, and has discovered that it can’t remove what it has grasped from the technology-containing bowl.

10.4 - Society is caught between, on the one hand, wanting to hold onto the technology which it has grasped and, on the other hand, not being able to function properly as long as its hand is wedded to that technology. None of the components in Suleyman’s ten-point strategy – whether considered individually or collectively – addresses the foregoing problem of addiction.

10.5 - When the Luddites -- toward whom Suleyman is, for the most part, so negatively disposed -- wrote letters, or demonstrated, or smashed machines (but didn’t kill anyone), they were seeking to engage the owners in an intervention of sorts because the latter individuals were deep in the throes of addiction to the technology with which inventors (their suppliers) were providing them. The owners responded to those interventions as most addicts would – that is, with: Indignation; incomprehension; contempt; confusion; silence; opposition; resentment; rationalizations; defensiveness; rage; self-justification; obliviousness to, or indifference toward, the damage they were causing, and/or violence.

10.6 - The structural character of addiction is both simple and complex. The simple part is that it is rooted in a variable, intermittent pattern of reinforcement, whereas the complex aspect of addiction is, on the one hand, trying to figure out what dimension of one’s being is vulnerable to such a pattern of reinforcement, and, on the other hand, figuring out how to let go of what one is so deeply desiring, and, therefore, so desperately grasping in the bowl of technology.