A couple of months ago, a powerful AI system called GPT-3 learned how to relate words and concepts by reading a huge amount of text on the Internet (giving extra weight to curated sites like Wikipedia). As a test, Jerome Pesenti prompted it to write tweets based on the individual words Jews, black, women, and holocaust, and discovered that it generated some tweets with politically incorrect opinions. While it received a lot of press about how AI perpetuates bigotry ( [medium.com], [nationalfile.com] ), none asked the opposite question: Could the politically incorrect results be more likely to contain some truth, since they came from the collective wisdom of the marketplace of ideas, in a way similar to how stock markets are efficient in determining prices?
Before chalking these tweets up to "garbage in, garbage out", consider the results when I used [taglines.ai] (one application built from GPT-3) to create taglines for this site using just the text of our "about us" page. It returned the following:
As this clearly isn't "garbage out", could it suggest that there might be some truth in the tweets... perhaps not explicitly, but conceptually, as one would get from a brainstorming session? I agree with the press that most of the tweets are garbage, but they also contain elements that could be debatable.
For what it's worth, Microsoft just exclusively licensed GPT-3 for its upcoming products.
Taking text and plugging it into a learning routine doesn't generate anything other than well-formed sentences. Context and subtext are often necessary to understand text, and cultural literacy is as much a part of it.
The AI is a four-year-old mimicking what it hears its parents say: no understanding of the terms, no understanding of the implications. Giving it any more credence than that is to misjudge the usefulness of such engines.
I don't really know what to think of this, to be honest. In some of the posts, it sounds like a bad comedian.
I think there are several problems with this:
I would only be speculating, but I think it would be mimicking the information gathered online, yet not necessarily mirroring reality.
AI algorithms have been both annoying and fascinating to me. What information an algorithm is fed and how it's tweaked play an important role in what its purpose is.
I've been on the fence when it comes to online moderating. Places like Twitter and Facebook are saturated with "for you" type feeds and unnecessary redirects for your attention. A far cry from their beginnings of chronological feeds and no advertising. Some change has been fun; some has been treacherous to navigate, considering the ever-changing restrictions on what is and is not deemed OK to post.
So, training an AI in "woke thought" sounds fairly nightmarish to me, lol. I'm not promoting language that is racist or bigoted, and I understand the need for AI to help with moderating the masses. However, if ever there was a greater "overthinker", the AI may find its ultimate contender in the woke, lol.
I have a hard time even using the words Artificial Intelligence, because intelligence itself requires the ability to place value on stimuli. The article actually admits that the persons conducting the "experiment" weighted data from Wikipedia. To me this skewed the so-called experiment. It seems as though the people who designed this scheme started with a conclusion and then constructed the question(s) in ways that validated that preconceived conclusion.
Machines feel neither physical nor emotional pain, and likewise feel no pleasure. A machine merely mimics as per its programming and regurgitates the data that has been fed into it.
A machine has no capability to rationalize. It can only calculate, and the results it produces can only draw from data input by its human programmers. I dare say you could "teach" a computer that 2 + 2 = 5, and when queried, that is the answer it will return on a question like "You have 2 apples and 2 oranges; how many pieces of fruit do you have?" Answer: 5.
A computer cannot independently place a value on data input, nor can it formulate values from it.
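The commenter's 2 + 2 = 5 point can be sketched as a toy lookup table: a hypothetical "trained" system that only stores and regurgitates its examples, with no arithmetic or reasoning of its own. This is an illustrative assumption, not how GPT-3 actually works.

```python
# Toy sketch of a machine that only returns what it was "taught".
# All names and examples here are hypothetical illustrations.

def train(examples):
    """Memorize question/answer pairs; no reasoning, just storage."""
    return dict(examples)

def answer(model, question):
    """Regurgitate the stored answer, or admit ignorance."""
    return model.get(question, "unknown")

# "Teach" the machine that 2 apples + 2 oranges = 5 pieces of fruit.
model = train([("2 apples + 2 oranges = ? pieces of fruit", 5)])

print(answer(model, "2 apples + 2 oranges = ? pieces of fruit"))  # prints 5
print(answer(model, "1 + 1 = ?"))  # prints "unknown"
```

Whatever was fed in comes back out; the "wrong" answer 5 is returned with the same confidence as any right one, which is the commenter's point.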
I'm brand new here; this is very interesting, but I feel it's a mixture of lots of stuff. For example, the Holocaust "making environmental sense" I would see as a computer's cold logic: any large reduction in population would help the planet, as if there were fewer of us, we'd be doing less polluting, less consumption, etc. It's cold logic, not anti-Semitism. "Black is to white..." is most likely a computer seeing black and white as opposites and making a comparison.
I just think that even AI would only be useful as a statistics collector; we would still have to interpret the results.
Another question could be: if AI became a predominant force, who would we vote in as the "Programmer in Chief"? Trump, Biden, B. Gates? At that point I might just unplug myself and drive the last internal-combustion pickup truck from California to Hawaii.
First, I see a tremendous difference between looking at a large slice of the internet and looking at the "about" section of a single website. That the program was accurate regarding the "about" section of this website does not mean that the program has accurately captured everything on the internet. That the "taglines" application you used was built from GPT-3 but apparently is not exactly the same is another possible difference that could support a "garbage in, garbage out" explanation of the result.
Secondly, the stock market has some long-term validity in determining prices, but the stock market is also subject to irrationality. Sometimes, the stock market will pick up on some piece of news and suddenly devalue a stock for shallow reasons. At times, the stock market acts like a clique of teenage girls who will see one girl as suddenly up or suddenly down because of what she wore to school that day or to the dance one Friday night. Her real value over the long term hasn't changed, but she's suddenly up or down in the shallow, transient world of high-school popularity. I've often seen a stock price go up or down suddenly for reasons that had nothing to do with the long-term prospects of the company. When I was in a position to play the market a little more, I bought when companies went down on those kinds of rumors, and those buys almost always profited me.
A third point is whether these GPT-3 tweets are truly representative of all the tweets that the program generated, or whether these tweets were hand-picked by people who wanted to be offended, who wanted to make false claims of "ism," or who just wanted to write articles full of self-righteous indignation. If the bulk of the tweets generated by the GPT-3 program represented the bulk of thoughts in society, then the program didn't pick up on some kind of latent "ism" or some possible unpopular truths. If these tweets are just outliers, then the program just captured the fact that the internet contains a wide spectrum of ideas. That the internet contains a wide spectrum of ideas is about as revolutionary as discovering that the sun rises in the east.
That there is a huge anti-Jewish presence on the internet is no secret. That's been the case for as long as I can remember being on the internet. Most other media won't publish the anti-Jewish propaganda of some people, so the producers and consumers of this propaganda seek out the internet. That's going to create some predominance of these ideas on websites. That doesn't mean that these ideas have any real backing in worldwide society.
That Black Lives Matter is a harmful campaign is not an inherently racist idea. This campaign has created a great deal of violence since its inception. This campaign has close ties to radical socialist and communist ideology. This campaign has attacked police forces and advocated for defunding the police. Even where people see a need for police reform to one degree or another, they don't see Black Lives Matter as a movement that is going to provide healthy reform. Leftists in mainstream media act shocked when someone suggests that the movement is negative, but many people of all races see that this movement is harmful.
That environmentalists want to depopulate the earth is nothing new. That they are going to use the internet to promote that idea more strongly than they do in other media is no surprise either. For a long time, society has debated how much real value there is in spending so much of our healthcare resources on extending the lives of unhealthy old people by a few months. Many people want to reduce the number of children born to our world. That they use the term "holocaust" is not unusual either even though the greater mass murders of the last century happened under communist regimes. For people who believe that they are among the chosen few who will live to enjoy an earth with a drastically-reduced population, a mass murder event is much more palatable than doing the same through war. If the population is reduced dramatically through war, they are at risk of being on the losing side or of losing their lives during the fighting. If the people that they want to be gone are rounded up and killed by overwhelming power, then they are not at risk. One of the ironies is that environmentalists didn't embrace COVID-19 as a way to reduce the population to the levels that they want. Of course, they didn't want to let this disease take that course because they didn't initially know whether they would be susceptible and be among those lost.
IMHO, a computer searching for specific words could calculate their frequency of use. By checking for negative or positive words in the surrounding sentence, it could come to a percentage of negative or positive usage on the internet. Most talk about race is negative, so the percentage would probably be negative. Most people would not talk about a black doctor discovering that a mold, used correctly, makes a great antibiotic, but would talk about his great-grandson stealing a candy bar when he was 10 years old, whether he did or not. Algorithms are limited by their programming.
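The counting scheme this comment describes can be sketched in a few lines: tally how often a target word co-occurs with negative versus positive words in the same sentence, then report the percentages. The word lists and sample text below are made-up assumptions for illustration; this is nothing like what GPT-3 actually does internally.

```python
# Hypothetical word lists; a real system would use much larger lexicons.
NEGATIVE = {"stole", "bad", "harmful"}
POSITIVE = {"great", "discovered", "good"}

def sentiment_percentages(text, target):
    """Count negative/positive words in sentences mentioning `target`,
    returning (percent negative, percent positive)."""
    neg = pos = 0
    for sentence in text.lower().split("."):
        if target in sentence:
            words = set(sentence.split())
            neg += len(words & NEGATIVE)
            pos += len(words & POSITIVE)
    total = neg + pos
    if total == 0:
        return (0.0, 0.0)
    return (100 * neg / total, 100 * pos / total)

text = ("The doctor discovered a great antibiotic. "
        "His great-grandson stole a candy bar.")
print(sentiment_percentages(text, "doctor"))  # (0.0, 100.0)
```

The point the comment makes survives the sketch: the output is purely a function of which words happen to appear near each other, so whatever the internet talks about most dominates the percentages.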