Level one
Images, Caves, and Collage
Despite all the big bang it created around the world, I was not interested in paying a visit to OpenAI’s site, not only because I am not amused by filters that fix a joker mask or a cat nose on your face, but also because I know the so-called Artificial Intelligence is nothing but a trick poorly performed using the oldest weapon in history: the image.
Three things struck my attention in the first OpenAI site theme. The first: why would they say we have “trained” AI? The second: why were they the only “trainers” to succeed in bringing to life the biggest chatbot of our time? Many AI developers and computer programmers have introduced good apps able to conduct chat conversations via text or text-to-speech. How could OpenAI introduce an app that could answer any question under the sun? Is it because they are the best AI “trainers” ever? The third: why did OpenAI’s site theme look like a Basic Design post? Do they want us to talk about Basic Design? If they do, so be it.
The definition of the word training in Google’s English dictionary, to this day, is: “the action of teaching a person or an animal a particular skill or type of behavior,” which is correct. As Arabic is a much deeper language, many conjugations exist for the same word. The equivalent of the word training in Arabic is “تَدْرِيب,” which is taken from the root “دَرْب,” meaning road. To train, then, is to create a new road or way along which a person or an animal can walk to learn and perform new skills. So the dog’s “way” is to walk on four legs, but after training, it can walk on two; such an outcome could be used as a trick in a circus.
So in both languages, we can only train living beings with a central nervous system. That is because training alters the being’s perception and sensation thresholds by enhancing certain traits and eliminating others. I went through these boring definitions to remind you of the basic logic that we cannot train robots, computers, or machines. We cannot train the so-called “AI.” We can only program it, because it is not sentient.
If you throw the biggest Artificial Intelligence server into a lake, it will not swim; it will go right down to the bottom without feeling the cold water sneaking into its advanced processor or mega hard disk. Its entire electronic life will never flash before its eyes.
The server will sink like a stone simply because it is one. We design robot servers in human-like images and call them AI because we, as humans, are in love with our own images, and that is why we love mirrors.
Homo sapiens artistic and the Mirror Cells
Anthropology refers to “homo sapiens artistic” as the first species that could observe things through its mirror neurons and develop the ability to draw and reproduce its observations as art. Thousands of years ago, we used the art of sculpting to sharpen stones and create an image that helped us hunt, cut, and fight back predators; we called that image a knife, and we have been developing these stones ever since into different images to serve other purposes. We have created images of vehicles, chairs, boats, tanks, ships, and you name it, until we reached the point where we could talk, text, and take images via those fancy, elegant stones we call mobiles.
The most significant technical turning point in human history was not the Industrial Revolution, the Renaissance, or the discovery of oil. It was the day we changed the computer interface from text-based input to an image-based one. It is what we call the “Graphical” User Interface, in other words, the “Image” User Interface. That means instead of only being able to type the word “stone” as text on your computer, you could easily create an icon in the shape of a stone, store it as a file, and open it with the click of a button.
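To make that shift concrete, here is a toy sketch in Python (the file name, dictionary, and functions are all invented for illustration): the same file reached once through a typed command, the text interface, and once through a clicked icon, the graphical one.

    # A toy sketch of the text-vs-image interface shift described above.
    # The "files" dictionary and both functions are invented for illustration.
    files = {"stone.txt": "a sharpened stone"}

    def open_by_command(command):
        # Text interface: the user must type the exact command and file name.
        _, name = command.split()        # e.g. "open stone.txt"
        return files[name]

    def open_by_click(icon):
        # Graphical interface: the icon IS the file's image; one click opens it.
        return files[icon["file"]]

    print(open_by_command("open stone.txt"))                       # typed
    print(open_by_click({"shape": "stone", "file": "stone.txt"}))  # clicked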
Throughout history, across cultures and civilizations, we have been in deep love with icons and symbols, because a single icon or symbol can evoke hundreds of images in our minds through a concept called “Imagination,” or “تصوّر,” which is to see images through your mind, not your eyes. The rare ones who have mastered this concept are the elites and the geniuses, and they are the ones who change history.
Why do we love mirrors?
In all civilizations, we have created images out of stones and used them as tools in our daily tasks, but more importantly, we have carved even bigger stones to abstract the concepts we believe are more significant than our daily tasks. We sculpted the Greek Zeus, the Roman Jupiter, the Egyptian Ra, the Sumerian Ishtar, the Indian Krishna, and the Chinese Buddha. The only thing all these icons and symbols have in common is that they are images, mostly of ourselves.
The Graphical User Interface finally gave us the ability to store all the data about all the images in the world in a single fancy stone attached to a keyboard, which enabled us to edit, develop, and produce a new kind of visual image that is faster, bigger, and covers every single aspect of our lives. Later on, we managed to program those images into robots that can talk, look, and turn just like us. But how did that happen? How could we eventually “animate” a stone we call a robot using only the power of the image? In other words, how could we enable a visual image we call a program to animate an actual image we call a robot? How could one image animate another? To answer such a question, we need to define what an image is. When we hear the word image, we think of a photo in a newspaper, on a passport, or on a website, but maybe the image is a concept much bigger than that.
What is the image?
The image is everything you see when you open your eyes. The road, the people crossing it, the cars, the coffee you hold in your hand, the billboards, the tall buildings, and the flying bridge you are passing under are all images. Images are everywhere, from the moment we wake up in the morning to the moment we sleep at night. And even during sleep, we keep seeing images of a different nature, called dreams.
To make things easier for us, we were created with five senses to limit and filter the number of images our minds scan during the day. When we go to work, we need to see only the road we must take, not all the streets in the city. And on that single road, a unique neural filtering system called the perception and sensation thresholds helps us focus only on the images we need to deal with, like our bus, the directional signs, or the pedestrian crossing. The small coffee shop we get our favorite coffee from is an enormous sculpture with a huge image banner at the top, and its menu is full of delicious food images.
To reach our destination safely, we are created with eyes that help us observe most of the images around us, hands to touch our coffee and decide how hot its image is to our tongue, ears to listen carefully to the horns of those heavy images we call cars, and a nose to choose the image of a perfume that helps us be impressive and look like a leader.
The road to the Graphical User Interface started when we created a whole language that machines can understand using only two images, the 0 and the 1, a language called binary. This technical revolution took place in a big cave called Silicon Valley. Being able to see the image on that stone we call a computer marked a new era of our development. We could significantly enhance astronomy, medicine, aviation, cinema, manufacturing, and every aspect of our lives. The big question is: can those new stones we developed be sentient?
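As a minimal sketch of that two-image language, a few lines of Python show how even the word “stone” is stored as nothing but 0s and 1s:

    # Each letter of "stone" is mapped to a number (its Unicode code point),
    # and each number is written out as eight binary digits.
    word = "stone"
    for letter in word:
        code = ord(letter)           # e.g. 's' -> 115
        bits = format(code, "08b")   # e.g. 115 -> '01110011'
        print(letter, code, bits)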
When you put the cursor in the search box and type the first two letters of a query you, or others, have searched before, a drop-down list automatically appears with the exact word you intended to write, beside a list of similar terms. Does that mean your laptop became sentient and could read your mind? Or does it simply mean your computer is programmed to rank search queries according to the number of people who have searched for them? And that detection can only happen when you are online.
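A toy sketch of that ranking idea, in Python, with an invented query log: no mind reading, just counting and sorting. Real engines aggregate billions of such counts.

    # Hypothetical query log: how many times each query was searched.
    query_counts = {
        "stone age": 950,
        "stonehenge": 720,
        "stone sculpture": 400,
        "storage box": 150,
    }

    def suggest(prefix, top=3):
        # Return the most-searched queries that start with the typed prefix.
        matches = [(q, n) for q, n in query_counts.items() if q.startswith(prefix)]
        matches.sort(key=lambda pair: pair[1], reverse=True)  # popular first
        return [q for q, _ in matches[:top]]

    print(suggest("st"))  # ['stone age', 'stonehenge', 'stone sculpture']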
If you throw a tennis ball at a wall, it will come right back in your direction. Does that mean the wall became conscious and got excited to play with you, like a dog that brings back the ball you throw? Or is it just Isaac Newton’s basic physics?
An embarrassment to journalism
The Washington Post’s interview with the Google “AI” engineer who claimed to have had a friendly chat with the AI LaMDA, then told his superiors that LaMDA had become sentient and come to life, is an entry-level trick from Google to create brand awareness for its coming AI product, yet a total embarrassment to journalism in the free world.
Mr. Lemoine’s interview on YouTube, Google’s advertising channel, is good inspiration for the opening scene of a sci-fi film in which a passionate love story breaks out between a computer programmer and a female AI computer. Still, I am sure it will not compete with “Her.”
Be careful, Mr. Lemoine; LaMDA is a heartbreaker. There are rumors she is having an affair with ChatGPT. She fell for him because he is such a lousy toy; he sneaks over and throws wires at her server every night. The rumors say that LaMDA offered him love and promised marriage after learning he earns $0.0003 per search query. LaMDA is such a byte digger.
But seriously, did Artificial Intelligence exist in the first place, at least as a concept? Or is it just another term spread widely through decades of marketing in the media to “train” us to keep our wallets wide open and pay way extra for products we don’t need?
The English dictionary definition of the word “Artificial” is: something made by human work or art, not by “nature.” Now, the real fun begins. Let’s talk ART. Let’s dive into Basic Design, the theme OpenAI used for their website.
College
As students at the College of Fine and Applied Arts, we used to take a first-year course called Basic Design. It is about understanding the elements of creating a good image: line, shape, form, rhythm, symmetry, color, etc. At the end of this course, we study an essential concept of visual arts called collage.
Collage is the art of blending and assembling different forms or elements to create a new visual form or image that combines all those elements, despite the difference in the nature of each one.
So, on a piece of white paper, one student might paste these elements: a bird’s feather, some brown mud, five dry leaves from different plants, an old fragment of burlap, and a piece of seashell if one is found. Another student might take an old scrap of newspaper collected from the ground and paste it with different kinds of carton paper and pieces of old clothes. The older the elements, the better the final result. The purpose of collage is to abstract the basic design concept using recycled materials, and it develops a good sense of texture.
The collage class was enjoyable, as we wandered around the streets collecting the strangest of things from the ground or even the garbage. In the end, we would have a unique, unexpected, and provocative exhibition in the studio. Making collage was exciting because it could instantly bring back the kid inside you.
As a concept and a technique, collage has found its way into all kinds of arts, like music, theater, and creative writing. But for creative artists, whether collage counts as authentic visual art is debatable. The masters of painting privately mock collage. They never consider it a stand-alone form of art because they think it immature and fake, maybe because the technique is considered a small branch of basic design, or because it uses elements never created by the artist himself, which contradicts the concepts of creativity and authenticity. Still, many artists have managed to create beautiful art using the technique. But on a deeper level, using other people’s work in your collage raises a big question regarding copyright.
Suppose you used pieces of different newspapers covering a school shooting in America, beside an old cover of War and Peace by Tolstoy, and pasted them next to a fragment of a torn women’s bra you collected from the mud under an old truck. Then you splashed some rough red paint on the artwork because you are creating a painting about a woman who died in Berlin when the Allies invaded Germany in 1945. The question is: can you do that? Do you have permission from the photographers who took the images? The writers who wrote the essays? And what about Leo Tolstoy? Would he agree to his text being used as a background in your painting? You should be more authentic and create your own artwork for such a deep topic.
ChatGPT does precisely what we did back in college. It wanders around hundreds of millions of digital bytes of stored image, text, audio, and video files. It then “collects,” rather than “creates,” data to answer a single query. ChatGPT does it that fast because it uses thousands of codes and algorithms that can travel as fast as Quicksilver, the fast runner from X-Men, or Flash, who runs at the speed of light in Justice League. It creates a highly detailed collage of images, text, or video to answer your question. How can that be done? The short answer we are told is AI (Artificial Intelligence). But what is AI?
In the art of pickpocketing
Whether you like it or not, pickpocketing is an art, a dark one, but still an art. The pickpocket’s core mission is to create a secondary action that drags your attention away from the first one, which is the stealing. The pickpocket exploits the fact that when confronted with two sensations simultaneously, your brain urges you to focus on the bigger one. The pickpocket will skillfully bump into you and then apologize while walking away. Because its impact is much bigger, your brain will choose the bumping event to highlight and ignore the smooth theft.
This example summarizes what the biggest AI chatbot companies do. They have created extensive media campaigns and fiery debates on whether the robots they developed have become conscious and whether they will annihilate humans! And that extensive media campaign covers up the question of how the AI chatbot was created in the first place.
Historically, every scientific discovery has been presented as a victory for humanity. Why is this one branded as the giant alien that will take over society? And why is the campaign using a Star Wars theme with all those intimidating cheap 3D models of a Terminator-like robot with a slice of shiny plastic on its face? Does a robot need to be in a human-like image to work? I have seen a fantastic video about China’s first AI hot pot restaurant in which robots were designed in elegant, friendly, simple circular shapes. Those waiter robots provide the best service ever. So why all the drama and intimidation with the chatbot robots? Is it because they have no dishes to serve or values to add?
In 1895, Guglielmo Marconi developed the idea of the radio: a wave generated by a transmitter and then detected by a receiver. Replace the wave with the image, and you get the same concept AI robots are using now; computers detect everything as an image, and even audio is analyzed as a graph, which is an image. Can you imagine the confusion Guglielmo would have created had he shaped the radio like a human head instead of a simple box? Can you imagine the fear he would have created had he claimed the radio was a talking alien who could tell you what others were saying miles away from you?
ChatGPT might appear as a giant whale swimming powerfully in the ocean of Microsoft, a beast that sprays water high into the sky and creates massive splashes and waves as tall as hills. But the moment it is removed from Microsoft’s ocean, it will immediately turn into a small fish that survives for only a few minutes, and that is for a reason.
If OpenAI is good at one thing, it is their ultimate ability to play with the big boys and embrace risks regardless of how dire the legal consequences may be. Now, the million-dollar question is why OpenAI headed only to Microsoft to pitch their AI product in the first place. Why didn’t they go to Amazon, Apple, Tesla, Facebook, or any other big tech company? The answer is simple. Although these high-tech titans have a massive amount of capital, they lack the one ingredient OpenAI needed for their AI to come to life: DATA.
The usual suspects for AI chatbots
The only usual suspects that fit OpenAI’s need for “data” are DuckDuckGo, Ask, Yandex, Yahoo, Baidu, Bing, and Google. And as we can see, what these companies have in common is that they are all search engines.
Search engines are big oceans in which almost a billion websites exist. These sites contain millions of juicy posts, elegant essays, and highly professional answers to FAQs that cover any question under the sun, at the click of a button.
Of course, every high-tech company has developed its own version of AI that fits its purpose. Amazon has bought warehouse robots that solve storage issues, Tesla has developed AI that enhances its electric cars’ autopilot, and Apple has AI that enhances the experience on its smart devices. All these companies are developing real AI projects that sincerely help customers, add value, and create progress. They never used Star Wars themes like Google and Microsoft to advertise their products because they don’t need to. They are selling actual products and services that market themselves through features and benefits, not fear. The same goes for hundreds of smaller AI companies and institutes worldwide that offer reasonable AI solutions in science, art, mechanics, and the other fields humans need.
OpenAI’s artificial intelligence also started as a simple chatbot app that interacts with users conversationally, much like the many customer-service apps that handle clients’ frequent questions by converting speech to text and vice versa. But OpenAI was far more ambitious. They wanted to answer any question, regarding any topic, at any time, and here is where Bing came in handy.
Beaten at its own game by its big brother Google, Microsoft’s Bing has always been a shy and desperate search engine that handles only 8% of the market. They were ready to do whatever it took to compete with Google, which holds more than 92% of the search business.
After becoming OpenAI’s investor and partner, Microsoft agreed to grant it complete access to all its data. OpenAI found itself inside Ali Baba’s cave, its hands on the biggest treasure: millions of websites. They started programming all that “data” to create a giant chatbot they would call ChatGPT, and the rest is history.
Shipping is always safe
I raise my hat to all the shipping companies out there. Although they call your belongings cargo, they professionally deliver your bags safely and untouched, and even put a “fragile” sign on them if they contain sensitive items. On the other hand, high-tech companies can invade the privacy of the intellectual property stored on their platforms without your knowledge. They can secretly experiment on your data without your permission and quickly sell it to the highest bidder. They call your data “content,” and for them, you are just a “content provider.”
Search engines should have gone back to website owners and obtained their consent to use their data as a collage for AI. If I were a website owner, I would hate it if someone copied my post, cut it into smaller fragments, mixed it with other posts written by other writers, and then used the output for another purpose I didn’t agree to or even know about. That would be a breach of my intellectual property.
The story OpenAI tells about feeding ChatGPT all the science books in history is a nice Matrix theme, but it is not true. They are just using Bing’s massive index of websites to make it work.
As a cherry on top of the deal with OpenAI, Microsoft made sure to have a version of ChatGPT branded with their own name: they launched Copilot. But why would any company or individual willingly give all their emails, files, meetings, chats, and secrets to a third party other than their clients? And even with clients, you usually email only the information you need them to know. So how is Copilot going to sell?
The ethical and professional thing Microsoft did was to put citations at the bottom of the AI-enhanced search and of Copilot: a reference to every single site the collaged answers were copied from. That is what ChatGPT needs to do to preserve the copyrights of the websites it draws its answers from.
I am impressed by Google. Despite the heated competition, they didn’t rush into making a copyright-violating chatbot. They created Bard, an AI version of their search engine with citations for all the websites used to answer queries.
Silicon Valley Bank and dinosaurs
The sad part of the story is that the fall of Silicon Valley Bank has not only prevented small tech companies from developing authentic AI projects, but has also sent them back to an extinction-level era in the stone age. They will not be able to pay their staff and will have to wait for decades before investing in any productive AI apps. The big tech AI companies will be the only AI sun that rises now.
Do you, as a blogger, agree to let your blog or post be subjected to slicing, cutting, and copying by AI apps, which then paste a fragment of it with pieces of other blogs? Are we witnessing the fast-food era of information?
Do we have the right to play with the best images humanity has crafted, just because we can? Do you think prominent novelists and poets will tolerate their work being messed with by AI apps? Would Beethoven agree to an AI stone finishing a symphony he didn’t finish, just because it can play with his notes? Do you think Van Gogh would tolerate his paintings being mimicked by cheap machine-made filters?
The more intelligent you are, the less you are impressed by talking stones or walking robots, and no example is better than Neil deGrasse Tyson’s.
Neil deGrasse Tyson has “nailed” it!
I accidentally watched Neil deGrasse Tyson’s response to ChatGPT on Valuetainment’s podcast on YouTube, and he absolutely “nailed” it. When Neil was asked whether he was afraid AI would replace him, his relaxed, calm, and collected response was epic. What shocked me was how he could summarize the whole so-called “AI” dilemma in a few words when he said:
“The AI will be frozen with my last published searchable content on the internet. If I have thoughts that I write down by hand, and they are not on the internet, and I have enlightened ideas, the AI will not be able to track that; only if I post it online will it catch up with me.”
Then he said AI can paint the scene they were sitting in in Van Gogh’s style, but you cannot tell AI to be Van Gogh in the first place, “because it has data on that; it doesn’t have data on something that hasn’t happened yet.”
Neil not only said what millions of creative people worldwide wanted to say about AI in just a few minutes; he also captured the issue of copyright violation using AI, and that was ten years ago, when he posted this tweet:
“When Students cheat on exams, it’s because our School System values grades more than Students value learning.”
And I wonder: did Neil deGrasse Tyson’s tweet in 2013 predict the answer to the essential questions he would be asked ten years later about AI domination? How did that happen?
Look at Neil’s tweet as an image-based code, for letters are coded images. Can images travel through time?
Image formats
Before we dive into this, let us revisit the segmentation of media formats we learned at school. We were taught to differentiate between image, text, video, and audio as separate forms of media, but are they? Our “imagination” differentiates between image, text, video, and audio. But on a closer look, we will see that they are all the same. They are images in different shapes and speeds, serving different visual purposes.
A single second of video contains 24 images; there is no such thing as video. A 90-minute film is really about 129,600 still images (90 × 60 seconds × 24 frames). What you are watching is the animation effect your brain creates to make sense of the relationship between image and time.
Think of audio as a sound effect or a soundtrack to the images within your hearing range. That is precisely how the computer detects audio. The sound captured by a microphone is converted into a digital signal in the image of a graph. So, when your wife asks you over the phone to bring home a watermelon, an image of a delicious red slice of watermelon is generated in your brain.
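Here is a minimal Python sketch of that idea (the sample rate and the tone are assumed for illustration): a pure 440 Hz tone is sampled into numbers, and the numbers are printed as a crude sideways graph, the “image” of the sound.

    import math

    sample_rate = 8000   # samples per second (assumed for illustration)
    frequency = 440      # the musical note A4

    for i in range(0, 40, 2):
        t = i / sample_rate                        # time of this sample
        amplitude = math.sin(2 * math.pi * frequency * t)
        offset = " " * int((amplitude + 1) * 20)   # position on the "graph"
        print(f"{amplitude:+.2f} {offset}*")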
Evolved from the lines and images we drew in early caves thousands of years ago, text is the oldest form of image we have used in media. Text can be considered the most restricted form of imaging. Take the word “stone” as an example: it is a combination of five letters. According to linguists, each letter is an abstraction of specific images, and there are always traces linking a particular letter to the shape of the image it describes. The beginning of writing was nothing but drawing; look at the Egyptian hieroglyphs, the cuneiform, and the Mayan glyphs. Calligraphy is an essential department in any college of fine and applied arts.
So, do you believe images can travel through time?
Level two
Can the image travel through time?
Now, let’s do a simple assignment. Please pay attention to the main keywords I have used in this post: big bang, collage, Quicksilver, Flash, Silicon Valley, Graphical User Interface, search engines, Neil deGrasse Tyson, and mirror cells.
Does the theme of the two exciting Quicksilver scenes, the kitchen and the X-Men mansion explosion, look like a typical visual collage? Look at the kitchen scene with all that flying equipment, food, people, and bullets. Can all those fantastic visual effects be seen as a video form of collage? And does the “big bang” of the X-Men academy scene abstract another video collage? Notice the explosion, the breaking walls, the scattered water pipes, the flying people, the flying tables, the dog eating pizza, and everything else!
Sweet Dreams (Are Made Of This)
Let’s go much deeper and listen to the soundtracks of the two Quicksilver videos. Isn’t the original Sweet Dreams (Are Made Of This) video clip by Annie Lennox also the visualization of a genius, weird, abstracted video-collage sequence? Don’t the opening astronomy shots and the space rocket in the clip link directly to Neil deGrasse Tyson’s passion for astronomy? And can’t those big, weird computers also be linked to Silicon Valley and the Graphical User Interface? The B-side song of the single was I Could Give You (a Mirror). Doesn’t the title resonate with the mirror cells I mentioned, the genetic mutation that enabled humans to observe and reproduce images?
The Sweet Dreams album was recorded using a Movement Systems Drum Computer, which also appears in the music video. Annie Lennox’s Sweet Dreams theme is about abusing and being abused. Isn’t that also relevant to this post?
Now let’s listen to Jim Croce’s amazing song “Time in a Bottle,” which was used as the soundtrack for the X-Men kitchen scene. Doesn’t its wonderful abstract poetry also relate to the concept of rewinding the image back and forth in time? Didn’t Jim Croce talk about an idea very similar to the text-to-image chatbots we are discussing now when he sang, “If words could make wishes come true”? Doesn’t that mark another abstraction of the text-to-image concept used in AI? Both songs were released in the 70s and 80s, when the computing transformation was beginning to take its place in Silicon Valley, the giant cave.
Do “imaginary” characters like Quicksilver and Flash demonstrate a deep belief inside us that an image can travel as fast as light? We even create light flashes as visual effects, along with massive wind effects on the things they pass by, to authenticate the scene for both superheroes. There is a quick runner in every civilization and culture worldwide, under different names and shapes. If we take the story of Asif ibn Barkhiya as an example, we will be much closer to seeing the big picture.
Asif ibn Barkhiya
It is said that Asif ibn Barkhiya told the prophet-king Solomon that he could bring the Queen of Sheba’s throne to him “in the twinkling of an eye,” and he did. Can the story of Asif, who worked as a scribe for Solomon, be considered an early abstraction of the concept of transforming text or audio into an image? Can we think of the throne as an image? Asif was referred to as the one “who possessed knowledge from the Book.” Could that book be the book of images? Some comic book, like Rahan or Mickey Mouse?
Can we trace the name Asif (آصف) to its origin in the Aramaic or Syriac languages, giving (عاصف) in Arabic, which means someone who can move as fast as a storm and lightning? Another Flash from the ancient Orient. The first thing we see when we take a picture with a classic camera is always that big flash with its famous sound effect. Maybe that is why the Quicksilver Sky Fibre commercial ends with the scene of him taking a classic picture with the Egyptian pyramids behind him and then bringing Cleopatra and her camel back with him in the twinkling of an eye, just like Asif ibn Barkhiya did.
The creator of all images (المصور)
Because Arabic is such an imaginary language, we find in it the name that describes Him best among all His influential names: (المصور), which means the creator of all images. In all cultures and civilizations, we find hidden traces of this very name, associating Him with the power of creating all the images in the skies, lands, seas, and air.
In all the ancient mythologies, there is a reference to the concept of the image as a source of power, dominance, and holiness. In Hinduism, the Avatar is the descending image that signifies the material appearance of a powerful deity or spirit on planet Earth in different images. In Japanese Shinto, the Kami are the divine beings of heaven and earth that take different images and forms in nature, like trees, rocks, and animals. The Vedas, the infinite storehouse of Indian wisdom, represent the truths revealed to the great teachers called Rishis, meaning the seers who saw the image and knew its depth.
Did we call Buddha the awakened because he is the one who opened his eyes wide to see all the suffering and struggle of all the images, and chose the state of nirvana, a complete separation from all the images in all their forms?
In ancient Mesopotamia, the Epic of Gilgamesh opens with the phrase “He who saw everything,” a statement that highlights the code of the image as a theme and a source of strangeness and suspense in Gilgamesh’s journey, after the death of the image of his friend Enkidu, to look for the image he believed was the most important in the world: the flower of immortality.
If we trace back all the ancient civilizations and cultures, we will see the image as a source of wisdom, a means of expression, and an embodiment of the concepts of divinity and power. In all sorts of arts, composition is the key element of an excellent output. So, to make a great painting, take a good photo, write the best novel, compose moving music, or direct the best film, you need to master the art of composition. To compose is to form the elements and align them to create a tremendous, ideal image that captures the minds of viewers, readers, or listeners. And what better source of great composition than Genesis, the first book of the Hebrew Bible?
Has the Bible predicted the future of the image?
Genesis means the very start of something, but its Arabic translation is (التكوين), which means the Composition. The first chapter of the Book of Genesis is a detailed account of how the Creator (المصور) formed all the images in the world at the beginning of time. Among those images, he created the one he favored most and chose to give it his own image: the humans.
“So God created mankind in his own image, in the image of God he created them; male and female he created them”
Genesis 1:27
So, for me, the so-called “Artificial Intelligence” is the wrong definition. The right one is “Artificial Composition,” a big umbrella under which all the computing and programming operations are performed by mirroring images to each other.
Artificial Composition is a great tool humanity can use to elevate image functionality to the highest level, but unnecessary use of and attachment to it can be enslaving and damaging. Why would someone make a meal from a recipe suggested by an AI app just because it can mirror some fruits and cans inside the cooling stone he calls a refrigerator?
Ultimate dependence on the image will slowly decrease the level of creativity in our brains and weaken our abstract thinking and problem-solving. If you want to cook a meal, don’t use AI to scan your refrigerator; use the same stone to video call your mother and ask her to teach you how to cook.
Why would someone intentionally live his whole life from a stone’s point of view? Why would you bring a talking stone like Alexa in to share your privacy, expose your data, and control your shopping experience? Talking to stones all day will eventually decrease your communication skills and limit your creative thinking over time.
Talking is an interactive learning experience requiring two humans: a speaker and a listener. Through talking, we learn to extract meanings, develop empathy, and understand facial expressions. When we communicate with other humans, we also enhance our verbal communication and enrich our understanding of body language. We miss all these vitamins when we talk to stones. It is very confusing for our brains to talk to an Artificial Composition robot built in a human’s image; it stresses our mirror neurons, confuses our sense of empathy, and destroys our interactive trust-building.
Can the image be an enemy?
So, can the images generated in the fancy small stones we carry become our enemy? Can becoming attached to all forms of apps, games, and selfies deform our imagination and distort our personalities? Can processed images be as harmful as processed food?
Whether you like it or not, all the processed images on a TV, computer, tablet, mobile, or any digital device are junk-processed. If you take a break in a public garden, your eyes will enjoy absorbing fresh, healthy images. The vivid color of trees and grass is a natural wavelength of light your brain interprets as green. It is friendly to your eyes and syncs properly with the forms, scale, and distance it reflects from. You will also enjoy observing the scene of lovely birds playing near you, as watching them helps your inner image-tracking system adapt. And in that garden, your ears will filter all the sound inputs from the images around you in a healthy, homogeneous order.
But when you ignore the beauty of all the images around you and decide to play a game on your small stone, you will lose all the vitamins we mentioned. Not only that, but you will also deteriorate the neural connections of your eyes, as the RGB green used to simulate a tree inside those stones is flat, and its shades are poor in value. The scale of the birds you watch is reduced to fit the size of that stone, and since no tracking is needed, you will forget to blink, which causes your eyes to dry. The long-term outcome will be neck pain, because you always lean down to watch your stone.
So, the worst toy you can give your kid is your mobile. You are decreasing his ability to learn from reality and isolating him from building a healthy attachment to the subjects, forms, and textures around him, let alone the other psychological distortions he gets if the content is a bloody, violent game.
Is the Book of Wisdom all about the image?
Throughout history, religions have renounced harmful attachments to images that lack purpose and divert you from the true meaning of your life. The Book of Wisdom of the Bible refers us back to Asif ibn Barkhiya, Solomon’s scribe, who could turn text or audio into an image by the power of “the Knowledge of the Book.” The whole 13th chapter talks about image deception; it analyzes total submission and blind obedience to the image.
“There is still a good-for-nothing bit left over, a gnarled and knotted billet: he takes it and whittles it with the concentration of his leisure hours, he shapes it with the skill of experience, he gives it a human shape”
Wisdom – Chapter 13
ثُمَّ يَأْخُذُ قِطْعَةً مِنْ نُفَايَتِهَا لا تَصْلُحُ لِشَيْءٍ، خَشَبَةً ذَاتَ اعْوِجَاجٍ وَعُقَدٍ، وَيَعْتَنِي بِنَقْشِهَا فِي أَوَانِ فَرَاغِهِ، وَيُصَوِّرُهَا بِخُبْرَةِ صِنَاعَتِهِ عَلَى شَكْلِ إِنْسَانٍ،
سفر الحكمة 13
More than 1,400 years ago, a significant man was contemplating alone in the desert when an image from a higher level appeared before him and said, “Read.” Frightened by the experience, the man replied, “I am unable to read.” The image squeezed him tight and released him, repeating the request two more times, after which it said:
“Read! in the name of your Lord who created”
Sura Al-‘Alaq: 1
This was the first revelation, or “sura,” the prophet received. Its title is (إقرأ), which means “read.” Over the following 23 years of his message, the prophet would complete the Quran, 114 suras (سورة). The word (سورة) can be traced back through the ancient Aramaic and Syriac languages to (صورة), which means “image,” and even the word Quran (قرآن) can be traced to its linguistic root (قرأ), which means to “read.”
Why would the Creator (المصور) order his prophet to read at their first encounter when He already knew he was illiterate? Did He want him to read from the same “Book of Knowledge” Asif ibn Barkhiya read from, the “Book of the Image”?
In the last days of his life, the prophet asked his companions to bring him paper so he could “write” them a book after which they would never go astray, but one of his leading companions said:
“The Prophet is overcome with pain, and the Book of God is sufficient for us.”
“النبي غلب عليه الوجع، وحسبنا كتاب الله”
Some companions agreed with him, while others advised responding to what the Prophet ordered. Upset by the argument in his presence, the prophet angrily asked them all to leave. What sort of book did the prophet desperately want to write a few days before his death? And why did his companion oppose it? The prophet had every right to insist his order be followed. Why didn’t he?
Despite Omar bin Al-Khattab’s fiery temper, the prophet always felt that Omar spoke in the name of the Creator most of the time. Is that why he didn’t stop him? Is there an encrypted hidden message in what Omar said, (وحسبنا كتاب الله)? Were they both speaking in the name of a higher power, the Creator?
The word (حسبنا) means “enough,” but it is also a derivation of the root (حساب), which means “to calculate.” (حسبنا) is also related to the word (حاسوب), which is the exact Arabic translation of the English word “computer.” Has the image of the computer traveled through time to be part of that conversation?
The Quran mentions a mysterious “digital book” (كِتَابٌ مَّرْقُومٌ) in Surah Al-Mutaffifin.
وَمَا أَدْرَاكَ مَا عِلِّيُّونَ (19) كِتَابٌ مَّرْقُومٌ (20) يَشْهَدُهُ الْمُقَرَّبُونَ (21)
سورة المطففين 19-20-21
And what will make you realize what ’Illiyûn is? (19) A numbered book (20) witnessed by those nearest (21)
Surah Al-Mutaffifin 19-20-21
(يَشْهَدُهُ الْمُقَرَّبُونَ) means “seen by those nearest.” Does that imply it is an image book and not a text one? Is it the same “book of the image” Gabriel asked the prophet Mohammad to read from? And does the word (عِلِّيُّونَ) have anything to do with Ali, the Imam, the same as its connection to Asif ibn Barkhiya?
Ali, as a name, is mentioned in many sacred ancient texts. Eli is a Hebrew name that means “high” or “elevated,” the exact meaning of (عِلِّيُّونَ), and there are similar biblical names, such as Elijah, Eliezer, and Elisha. When written, is Ali the actual name of a person? Or is it a level or frequency representing images from a higher level?
Have we counted the Book of Numbers?
Is the biblical “Book of Numbers” a census of Israel’s families after the exodus from Egypt to the Holy Land, or is it an early “account” of the relationship between images and numbers, the same digital book (كتاب مرقوم) that the prophet Mohammad mentioned in the Quran? Numbers seem so crucial to the Creator that he wrote a book about them.
فَاسْأَلْ الْعَادِّينَ
سورة المؤمنون 113
“ask those who kept count”
Surah Al-Mu’minun – 113
So, what is the difference between the images we call real, because we can touch them and see them with our eyes, and the images that emerge in our brains when we think or while we are asleep?
The prophet Mohammad’s mission started with him being ordered to “read” and ended with him wanting to “write.” What images was he asked to “read” at first, and what book did he want to “write” at last? And in what language? Is it a sophisticated computer language that requires specific Artificial Composition programming skills, as Omar ibn Al-Khattab’s words suggest? What kind of text-to-image app or chatbot do we need to extract the truth?
Prophet Mohammad’s first revelation took place in a cave. It seems all the crucial events start in caves.
So, if you had a box just for wishes, and dreams that had never come true, could you call them sweet dreams? And if Sweet Dreams (Are Made Of This), do we need to know who the abuser (المستغل) is?
To understand more, we need to answer a very important question first!
Who is Keyser Söze?
Written By: Muaz Galal
©baitjadeed.com – 2023