
If an AI system, trained by feeding it all the text on the Internet, ends up concluding racist things, would that more strongly suggest that racism is real, or that the racist statements are true?

A couple of months ago, a powerful AI system called GPT-3 learned how to relate words and concepts by reading a huge amount of text on the Internet (giving extra weight to curated sites like Wikipedia). As a test, Jerome Pesenti prompted it to write tweets from single words such as "Jews", "black", "women", and "holocaust" and discovered it generated some tweets with politically incorrect opinions. While it received a lot of press about how AI just perpetuates bigotry ( [medium.com], [nationalfile.com] ), none asked the opposite question: could the politically incorrect results be more likely to contain some truth because they came from the collective wisdom of the marketplace of ideas, in a way similar to how stock markets are efficient at determining prices?

Before chalking these tweets up to "Garbage in, Garbage out", consider the results when I used [taglines.ai] (an application built on GPT-3) to create taglines for this site using just the text of our "about us" page. It returned the following (a rough sketch of how this kind of GPT-3 call is made appears after the list):

  • The free and open exchange of ideas
  • We're a group of people who want to make the world better
  • Building Civil Discourse
  • Our mission is to bring together people who want Political Debate without the vitriol
  • A place to talk about solutions, not problems.
  • Where minds meet.
  • The Free Exchange of Ideas
  • The global home for free debate
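For those curious about the mechanics, here is a minimal Python sketch of how such a completion call looked with the 2020-era OpenAI client. The prompt wording, parameters, and model name are illustrative guesses on my part, not the ones taglines.ai actually uses:

    # Rough sketch of generating taglines with the 2020-era OpenAI completion API.
    # Assumes the old openai 0.x Python client and an OPENAI_API_KEY in the
    # environment; the prompt and parameters are illustrative, not taglines.ai's.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    about_page = "Welcome to the community for those who value free speech, evidence and civil discourse."

    prompt = f"Write a short tagline for this website:\n\n{about_page}\n\nTagline:"

    response = openai.Completion.create(
        engine="davinci",   # the original GPT-3 model name at the time
        prompt=prompt,
        max_tokens=16,
        temperature=0.9,    # higher temperature -> more varied taglines
        n=8,                # ask for several candidates at once
        stop=["\n"],
    )

    for choice in response.choices:
        print(choice.text.strip())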

As this clearly isn't "Garbage Out", could it suggest that there might be some truth in the tweets... perhaps not explicitly, but conceptually, as one might get from a brainstorming session? While I agree with the press that most of the tweets are garbage, they also contain elements that could be debatable.

For what it's worth, Microsoft just exclusively licensed GPT-3 for its upcoming products.

Admin 8 Sep 25


28 comments


9

Taking existing text and plugging it into a learning routine doesn't generate anything other than well-formed sentences. Context and subtext are often necessary to understand text - and cultural literacy is as much a part of it.

The AI is a 4-year-old mimicking what it hears its parents say - no understanding of the terms, no understanding of the implications. Giving it any more credence than that is to ignore what such engines are actually useful for.

Doesn't make sense to compare it with a 4-year-old when such a large portion of the world seems to use the same logic these days lol

7

Someday a computer will give a wrong answer to spare someone's feelings, and man will have invented artificial intelligence. (Robert Brault)

You just defined every lazy AFK video game character in existence.

@Mikewee777 Pardon my ignorance, but what exactly is an "AFK video game character"? I'm not a gamer, so I'm not sure what the acronym AFK stands for.

@Krunoslav any character that has limited inputs (either intentionally or unintentionally). A.F.K. stands for "away from keyboard".

Video games are like board games, but they are often expanded into interactive real-time competitions.

In these simulated competitions, AI opponents often play poorly for the sake of a more enjoyable handicap.

Giving the illusion of near success does better for morale than absolute failure.

It also "saves face".

@Mikewee777 I see. Hence your original comment "You just defined every lazy AFK video game character in existence."

And now the scary scenario. Government run by AFK video game characters. lol

@Krunoslav my country calls that a "lame duck".

It is a warm body that just sits there.

[en.m.wikipedia.org]

And yes these jeans make my butt look fat. Brault's quote is pretty deep. At that point, truth is dead.

@Admin consider the fact that bots are used elsewhere to influence human perception... human perception influences AI... and select humans programmed the initial AI algorithm

@Admin By elsewhere, I mean Twitter, Reddit, IG, etc... while here we seem to have a community of genuine people who, for the most part, avoid needless trolling

@Admin, I am skeptical.

We need some nude photos of this rear you are criticizing.

For the sake of computer science.

5

These are not statements I hear people make or write, so if it has any merit at all, and I don't believe it does, I would attribute it to the program or the programmer.

GPT-3 tends to come up with output that connects meanings in novel ways. It doesn't "understand" what it outputs, but what it outputs appears reasonable to itself. The programmers didn't tell it what to say... only trained it to say things it considers reasonable given text related to the input.

@Admin Oh so it processes what people say like the mainstream media then?

@Lexpd1145 It could be that the AI is becoming its own echo chamber, as the text that it outputs is first evaluated by itself to see if it is "reasonable" compared to the text that it has seen. It's not really "thinking" but just giving its best guess that its output would be considered true by the people who generated the input text.
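To make that "best guess against text it has seen" idea concrete, here is a toy Python sketch of my own (not how GPT-3 is actually built): a tiny bigram model that scores how plausible a candidate sentence looks relative to the text it was trained on. Word orders that resemble the training text score higher than scrambled ones.

    # Toy illustration (not GPT-3 itself): a bigram model that scores how
    # "plausible" a sentence looks relative to the text it was trained on.
    from collections import Counter, defaultdict
    import math

    corpus = "the free and open exchange of ideas is the basis of civil discourse".split()

    # Count word-pair frequencies in the training text.
    pair_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        pair_counts[prev][nxt] += 1

    def plausibility(sentence):
        """Average log-probability of each word given the previous one (smoothed)."""
        words = sentence.lower().split()
        total = 0.0
        for prev, nxt in zip(words, words[1:]):
            counts = pair_counts[prev]
            prob = (counts[nxt] + 1) / (sum(counts.values()) + 1000)  # add-one smoothing
            total += math.log(prob)
        return total / max(len(words) - 1, 1)

    # A sentence resembling the training text outscores a scrambled version of it.
    print(plausibility("the free exchange of ideas"))
    print(plausibility("ideas exchange free the of"))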

@Admin LOL, just messing with you.

@Lexpd1145 it is able to detect the context of a subject from being run through extensive pattern-recognition training, by feeding it multitudes of examples as quickly as the supercomputer allows...

@Lexpd1145 it's modeled to develop neural pathways, the way our brains are believed to develop over time... imagine a "schizophrenic" investigator who has decorated his basement with newspaper clippings and thumbtacks covering the walls

@DesireNoDesires While there may be some who think there is some hidden, underlying bias that this thing has stumbled upon, I am not one of those. I don't really care how it came up with these statements; I don't see or hear actual people saying these things. We have enough people twisting what people say already, now the computers wanna get in on the act too? Wonderful.

@Lexpd1145 it's probably propaganda itself, meant to give us the perception that the information it gives us must be concrete truth, since it simply calculates from the given data and so can't possibly be motivated by a personal agenda

5

Sounds to me like the politically correct crowd had a hand in that, manipulating the AI...

The output really is like that... I've tested it myself. It can produce almost full articles that are pretty insightful based on just a few words.

@Admin No, they are not insightful. The insight is yours and you read it into the text. Remember the program Eliza? It had NO intelligence but could apparently hold a conversation.

The text of many "fortune telling" books appears to be insightful but really just uses terms and phrases that can be interpreted by the reader to suit the reader.

If we ever manage to build a true artificial intelligence then what it says might be insightful.

Agreed, Spike, especially as "GPT-3 learned how to relate words and concepts by reading a huge amount of text on the Internet (giving extra weight to curated sites like Wikipedia)".

@Thasaidon @skaarda @TheMiddleWay The key difference between Eliza and GPT3 is that the former had precoded responses keyed to input grammar and couldn't extrapolate (a toy sketch of the Eliza approach follows below), while GPT3 can both extrapolate and summarize. For example, I used [taglines.ai] to create taglines for this site using just the text of our "about us" page and it returned the following:

  • The free and open exchange of ideas
  • We're a group of people who want to make the world better
  • Building Civil Discourse
  • Our mission is to bring together people who want Political Debate without the vitriol
  • A place to talk about solutions, not problems.
  • Where minds meet.
  • The Free Exchange of Ideas
  • The global home for free debate

GPT3 is a game changer.
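To illustrate the contrast, here is a rough Python sketch of the Eliza approach - hand-written rules keyed to input patterns, with no learning. This is my own toy version, not the original program; the point is that such a system can only echo its templates, whereas GPT3 generates text it was never explicitly given.

    # Toy Eliza-style responder: pre-coded pattern -> template pairs, no learning.
    # (A minimal illustration, not the original program.)
    import re

    RULES = [
        (r"\bI need (.+)", "Why do you need {0}?"),
        (r"\bI am (.+)",   "How long have you been {0}?"),
        (r"\bbecause\b",   "Is that the real reason?"),
    ]

    def respond(user_input):
        for pattern, template in RULES:
            match = re.search(pattern, user_input, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Please tell me more."   # fallback when no rule matches

    print(respond("I need a place for civil discourse"))
    # -> "Why do you need a place for civil discourse?"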

@Admin Accepted, this GPT3 is many generations on from Eliza. This does not mean that it understands the results it produces, or that they are the result of insightful thought and experience.

Look at the program that won Jeopardy. A few of the answers it came up with were irrelevant garbage that no sane person of moderate intelligence would consider.

That is not to say GPT3 is not useful, and I think we can learn a lot from it, but until we reach the singularity I do not think we can talk about a program being insightful.

@Admin Right, but why would GPT3 give extra weight to curated sites like Wikipedia?

4

I don't really know what to think of this, to be honest. In some of the posts, it sounds like a bad comedian.

I think there are several problems with this:

  1. AI cannot determine if a source is a parody or satire since it only judges the text itself.
  2. Terminology can be used in multiple ways, and there is no way to determine what the computer "understands" i.e. "black" as a color or as a race or social whatever.
  3. Linguistically, the computer does not always make grammatically or syntactically correct sentences, leaving semantic interpretation vague. (How do all the women do the lifting? Are men not doing "the lifting"? WHAT IS THE LIFTING?)
  4. Giving extra weight to curated sites means that it could be unintentionally reading some kind of "shitpost", like when people go into Wikipedia and edit articles or when people go to urban dictionary and create whatever new, shocking term they can.
  5. Terminology can also limit the AI "understanding", i.e., focus on posts that say "black" instead of "African American", which may change the genre of articles the AI scans.
Imastel Level 6 Sep 25, 2020

If you mean the "Question of the Day" posts, I agree with you that they are sometimes positioned in a quirky way... and often are not fully thought out or refined when the post goes up. I try to improve the post from the comments as a feedback loop.

  1. AI can do sentiment analysis to judge whether text is positive or negative with accuracy similar to humans (a crude sketch of the idea appears at the end of this reply). I agree with you that if the bulk of the input to a system is parody, it might mistake parody for reality.

  2. Could be. Even as colors, black/white has historically been linked to darkness/lightness... and humans naturally are more afraid of darkness. When we see the tweets, they are puzzles to be unpacked and examined.

  3. Agreed, the algorithm used is only a few years old. The jump in ability from GPT-2 (much smaller) to GPT-3 suggests that it has the promise to refine and explain its output. Yep, "lifting" is a good example of the fuzziness of meaning.

  4. True, but since it ingested billions of text elements, a few "shitposts" get averaged out.

  5. Yes

Also, the outputs are intentionally made to be short tweets, compressing meaning into a few words. This forces the AI to lose either nuance or accuracy.
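On point 1 above, a crude word-list version of sentiment analysis looks something like the Python sketch below. This is my own toy example; real systems learn these associations from labeled data rather than from a hand-made lexicon.

    # Toy lexicon-based sentiment scorer (illustration only; modern systems learn
    # these weights from labeled data instead of using a hand-made word list).
    POSITIVE = {"good", "great", "love", "civil", "better", "free"}
    NEGATIVE = {"bad", "hate", "vitriol", "garbage", "worse", "troll"}

    def sentiment(text):
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

    print(sentiment("This thread is garbage"))            # -> "negative"
    print(sentiment("A great place for civil debate"))    # -> "positive"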

3

An old adage should be listened to here, me thinks: Garbage in, Garbage out.

I haven't heard that one in a while, but it is true.

@KeithThroop As true with people as it is for AI 😟

3

Garbage in garbage out.

wolfhnd Level 8 Sep 25, 2020

While I agree with you that most of the tweets are garbage, they also have elements of truth. For example, the one that opines that a mass killing would be good for the environment if it were a moral thing to do.

Would one assume that the input is garbage if the output is?

@Admin

Artificial intelligence is hard to define. What we have now is sophisticated algorithms. Examples of artificial intelligence that we take for granted are optical character recognition, complex games, voice recognition, etc. There is a philosophical discussion as to whether what we call artificial intelligence is really "intelligence". I'm willing to accept that a machine with a minimal amount of autonomous decision-making capability is intelligent. I would make the same argument concerning intelligence being a property of all life. No one, however, is claiming that machines are alive. That raises the question: if we can make machines that can "play" chess, why can't we create artificial life?

At least part of the answer to why we can create artificial intelligence and not artificial life is that we tend to overestimate the complexity of the mental tasks we perform and underestimate the complexity of even the simplest life form. As it relates to the question of whether the algorithm "discovered" something nontrivial by analyzing the content of the internet, it requires an understanding of complexity.

When considering intelligence, especially artificial intelligence, a good place to start is IQ. I don't know how many times I have heard highly intelligent people say IQ doesn't measure intelligence. If you question them carefully, however, you will find they can't offer a coherent definition of intelligence.

As it relates to this discussion, there are two ways to get intelligent solutions. Computers do it with brute-force iterated approximation. Since the human brain is too slow for brute force, it must be using more refined algorithms. In both cases the algorithms have both evolved and been enhanced by external algorithms: in the case of the computer, by algorithms in the programmer's brain; in the case of the human, by algorithms that have evolved as a product of swarm intelligence or culturally. It is important to note that the raw intelligence or computing power of computers has similarly evolved, because no one person or culture has designed a machine as complex as a computer.

What people miss is that there are feedback loops in both systems defining the direction they evolve in. In the case of humans, group selection has favored the genetics behind IQ where complex cultures have existed to take advantage of swarm intelligence, or if you like, culturally transmitted problem-solving algorithms and data. It's a very politically incorrect reality. It is also ego-crushing for everyone who thinks they are intelligent. Human intelligence itself can be thought of as artificial to the extent that it depends on culturally transmitted thinking tools or algorithms.

The answer to the original question is that, faced with compromised data, the algorithm was insufficient to produce anything but trivial "truths", and that unreliably. That said, once understood, all truths are trivial. Or as I like to say, all absolute truths are trivial. What it would take for the algorithm to produce nontrivial truth is the more interesting question.

3

A useless exercise for those who have limited goals. The "prompter" got what the programmer prompted. Or the old "garbage in..." etc. Sometimes the only thing these exercises show is a misunderstanding of what human thinking actually is.

I agree with you that it's hard to tell if AI is thinking or not... then again, it's the same with people.

@Admin Thinking requires consciousness. And that cannot be defined, even now, with all the computing power in the world.

3

AI still has the limited cognitive strength of a naive child.

It is not coming to conclusions, it is just repeating what it was told.

Blindly following orders is not consent.
Recycling statements is not originality.

[en.m.wikipedia.org]

Possibly... but considering the example I put in the post, it appears to be doing more than just repeating what it's been told.

3

I would only be speculating, but I think it would be mimicking the information gathered online, yet not necessarily mirroring reality.

AI algorithms have been both annoying and fascinating to me. What information an AI is fed and how it's tweaked play an important role in what its purpose is.

I've been on the fence when it comes to online moderating. Places like Twitter and Facebook are saturated with "for you" type feeds and unnecessary redirects for your attention. A far cry from the chronological, ad-free feeds they began with. Some change has been fun, some has been treacherous to navigate considering the ever-changing restrictions on what is and what is not deemed OK to post.

So, training an AI in "woke thought" sounds fairly nightmarish to me lol I'm not promoting language that is racist or bigoted, and I understand the need for AI to help with moderating the masses. However, if ever there was a greater "overthinker", the AI may find its ultimate contender in the woke lol

It will be very telling if "woke thought" must be hard-coded into an AI system to "correct" it from saying things it thinks are true.

@Admin mind... blown... lol nice

3

I have one acronym for you: GIGO

Absolutely.
Equivalent exchange.
Garbage in = Garbage out.

@Mikewee777 Do you think Wikipedia is the "Garbage In"? GPT3 isn't just parroting back text that it read; it learns the relationship between words and their meaning. Just as it created those tweets, it can also accurately do other things. For example, I used [taglines.ai] to create taglines for this site using just the text of our "about us" page and it returned the following:

  • The free and open exchange of ideas
  • We're a group of people who want to make the world better
  • Building Civil Discourse
  • Our mission is to bring together people who want Political Debate without the vitriol
  • A place to talk about solutions, not problems.
  • Where minds meet.
  • The Free Exchange of Ideas
  • The global home for free debate

As this clearly isn't "Garbage Out", it would suggest that the tweets might not be either.

@Admin There is a lot of garbage on the Net, and Wiki is not exempt from it. I've seen many articles written by Progs that are full of lies and misinformation; the good thing is that many are also corrected or deleted.

2

I find it interesting that this post equates “racism” with “politically incorrect”, as if saying anything that goes against the phony norms created by pop culture, academia, and a thoroughly biased and corrupt mainstream media is automatically racist.

Andyman Level 8 Sep 25, 2020

I guess racism is considered politically incorrect, among other things. Both terms are getting pretty fuzzy.

@Admin, “racism” is just a word marxists and democrats, but I repeat myself, use to try to smear anyone who exposes their bullshit.

2

Racism is a prejudice which is emotionally based, in the main. Why would you expect it to show up unless you programmed a computer to be able to have a tantrum? It would also need to understand humour and nonsensical pop lyrics.

Here's GPT-3 generating rap lyrics... [mattbrandly.com]
... and heavy metal [twilio.com]. I guess pop lyrics were too easy 😉

@Admin

I listen to a fair bit of female-led rock, as it's termed; the AI would imagine there's some war going on between Amazons and Valkyries.

2

I have a hard time even using the words Artificial Intelligence, because intelligence itself requires the ability to place value on stimuli. The article actually admits that the persons conducting the "experiment" weighted data from Wikipedia. To me this skewed the so-called experiment. It seems as though the people who designed this scheme started with a conclusion and then constructed the question(s) in ways that validated that preconceived conclusion.
Machines do not feel pain - neither physical nor emotional pain - and they likewise feel no pleasure. A machine merely mimics as per its programming and regurgitates data that has been fed into it.
A machine has no capability to rationalize. It can only calculate, and the results it produces can only draw from data input by its human programmers. I dare say you could "teach" a computer that 2 + 2 = 5, and when queried that is the answer it will return on a question like "you have 2 apples and 2 oranges - how many pieces of fruit do you have?" Answer: 5.
A computer cannot independently place a value, nor can it formulate values on data inputted.

iThink Level 9 Sep 25, 2020

Machine learning systems like GPT-3 do feel "pleasure" in the sense of an objective function - a numerical value calculated to reflect the "goodness" of a result. The programmers didn't define what the system says but instead used an algorithm that lets it measure how plausible its own output is.
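Here is a minimal Python sketch of what such an objective function can look like, assuming the usual next-word-prediction setup that language models are trained on; the probabilities below are made up purely for illustration.

    # Sketch of the kind of objective a language model is trained to maximize:
    # the (log-)probability it assigns to the words that actually come next.
    # The probabilities here are invented for illustration.
    import math

    def objective(predicted_probs, actual_next_words):
        """Average log-probability assigned to the words that really followed.

        Higher is "better" in the sense described above: the model is rewarded
        numerically when its guesses match the training text.
        """
        return sum(math.log(predicted_probs[w]) for w in actual_next_words) / len(actual_next_words)

    # Model A puts more probability on the words that actually occurred than model B.
    model_a = {"discourse": 0.4, "vitriol": 0.1, "ideas": 0.3}
    model_b = {"discourse": 0.1, "vitriol": 0.6, "ideas": 0.1}
    actual = ["discourse", "ideas"]

    print(objective(model_a, actual))  # closer to 0 (better)
    print(objective(model_b, actual))  # more negative (worse)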

@Admin but don't you see, those values are all human inputs to the computer. The computer only knows pleasure as identifiable by human programming.
Because a machine can articulate, print, or display words like feel, emotions, love, hate, fear, happiness, racism... does NOT mean that the machine itself knows any of those things. The algorithms establish the values and defining characteristics of those things, unavoidably in human terms and of human perception, and instruct the computer to output accordingly. Even though those words are "emotional", the computer has no capacity for emotion.

@iThink This was a theme explored in "Westworld"... whether the human brain is simply outputting thoughts that are the result of a predictable algorithm. If it's impossible for a reader to know whether a text was written by a human or an AI, does that suggest the question is irrelevant?

@Admin your question is a bit confusing. Can you clarify?

2

Only just brand new here; this is very interesting, but I feel it's a mixture of lots of stuff. For example, the Holocaust making environmental sense I would see as a computer's cold logic: any large reduction in population would help the planet, since if there were fewer of us we'd be doing less polluting, less consumption, etc. It's cold logic, not anti-Semitism. "Black is to white..." is most likely a computer seeing black and white as opposites and making a comparison.

Bill Gates seems to have the same cold logic... did programming influence him, or did he influence the programming?

@DesireNoDesires Microsoft just exclusively licensed it... so yeah, a bit like Bill Gates.

1

GPT-3 is a string probability guesser and it does this based on a large corpus of data. It’s therefore no surprise that GPT-3 struggles with causation or understanding.
[forbes.com]

1

Not for making a "better world", as I think we already passed that. You're never going to end much racism, classism, etc. Hell, people look at you funny if you have different shoes on or a better car than they do.

I try to make sure my shoes match for that very reason. The AI used today by social media companies has really destroyed the ability to moderate views. This was an interesting film:

@Admin can't place the blame on AI when we are the ones who have been training it...

we were tricked into using Facebook, and gave up direct texts and calls
we gave up picture messaging for Snapchatting

they've convinced the masses to pass all of the data we produce through them

@DesireNoDesires Yes, there is a symbiotic relationship between AI and human brains. Hopefully, it will self-correct and work towards a better future. That is, we can emerge from this messy current state of affairs stronger.

@Admin we must correct it... we must use trending-topic hashtags to our advantage, correcting the context of any that are being used to paint a deceptive narrative

@Admin the key is not to avoid popular social media sites, but to take them over

0

God manipulated AI to troll His Chosen

0

I just think that even AI would only be useful as a statistics collector; we would still have to interpret.

Another question could be: if AI became a predominant force, who would we vote in as "Programmer in Chief" - Trump, Biden, Bill Gates? At that point I might just unplug myself and drive the last internal-combustion pickup truck from California to Hawaii.

0

First, I see a tremendous difference between looking at a large slice of the internet and looking at the "about" section of a single website. That the program was accurate around the "about" section of this website does not mean that the program has accurately captured everything on the internet. That the "taglines" tool you used was built from GPT-3 but apparently is not exactly the same system is another difference that could support a "garbage in - garbage out" explanation of the result.

Secondly, the stock market has some long-term validity in determining prices, but the stock market is also subject to irrationality. Sometimes, the stock market will pick up on some piece of news and suddenly devalue a stock for shallow reasons. At times, the stock market acts like a clique of teenage girls who will see one girl as suddenly up or suddenly down because of what she wore to school that day or to the dance one Friday night. Her real value over the long term hasn't changed, but she's suddenly up or down in the shallow, transient world of high school popularity. I've often seen a stock price go up or down suddenly for reasons that had nothing to do with the long-term prospects of the company. When I was in a position to play the market a little more, I bought when companies went down on those kinds of rumors, and those buys almost always profited me.

A third point is whether these GPT-3 tweets are truly representative of all the tweets that the program generated or whether these tweets were hand-picked by people who wanted to be offended, who wanted to make false claims of "ism," or who just wanted to write articles full of self-righteous indignation. If the bulk of the tweets generated by the GPT-3 program represented the bulk of thoughts in society, then the program didn't pick up on some kind of latent "ism" or some possible unpopular truths. If these tweets are just outliers, then the program just captured the fact that the internet contains a wide spectrum of ideas. That the internet contains a wide spectrum of ideas is about as revolutionary as discovering that the sun rises in the east.

That there is a huge anti-Jewish presence on the internet is no secret. That's been the case for as long as I can remember being on the internet. Most other media won't publish the anti-Jewish propaganda of some people, so the producers and consumers of this propaganda will seek the internet. That's going to create some predominance of these ideas on websites. That doesn't mean that these ideas have any real backing in worldwide society.

That Black Lives Matter is a harmful campaign is not an inherently racist idea. This campaign has created a great deal of violence since its inception. This campaign has close ties to radical socialist and communist ideology. This campaign has attacked police forces and advocated for defunding the police. Even where people see a need for police reform to one degree or another, they don't see Black Lives Matter as a movement that is going to provide healthy reform. Leftists in mainstream media act shocked when someone suggests that the movement is negative, but many people of all races see that this movement is harmful.

That environmentalists want to depopulate the earth is nothing new. That they are going to use the internet to promote that idea more strongly than they do in other media is no surprise either. For a long time, society has debated how much real value there is in spending so much of our healthcare resources on extending the lives of unhealthy old people by a few months. Many people want to reduce the number of children born to our world. That they use the term "holocaust" is not unusual either even though the greater mass murders of the last century happened under communist regimes. For people who believe that they are among the chosen few who will live to enjoy an earth with a drastically-reduced population, a mass murder event is much more palatable than doing the same through war. If the population is reduced dramatically through war, they are at risk of being on the losing side or of losing their lives during the fighting. If the people that they want to be gone are rounded up and killed by overwhelming power, then they are not at risk. One of the ironies is that environmentalists didn't embrace COVID-19 as a way to reduce the population to the levels that they want. Of course, they didn't want to let this disease take that course because they didn't initially know whether they would be susceptible and be among those lost.

0

It depends on how it would answer your same question but with racism and racist things replaced with God or existential purpose. What do you think it’d say then?

0

What would it say if you asked your same question but replaced racist things and racism with God or existential purpose?

0

Guy below admits that if you can change your shoes, you can change your philosophy and world view and stop the lies that you were born that way. You were conditioned that way!

I wonder if any shoes are politically incorrect. Perhaps sandals are cultural appropriation? 🙂

0

IMHO, a computer searching for specific words would conclude the percentage of use. From the negative or positive words in the sentences where they're used, it could come to a percentage of negative or positive on the internet. Most talk about race is negative, so the percentage would probably be negative. Most people would not talk of a black doctor discovering that a mold, used correctly, makes a great antibiotic, but would talk of his great-grandson stealing a candy bar when he was 10 years old, whether he did or not. Algorithms have limits set by their programming.

Agree that public opinion may not be the best way of discovering truth. There is both madness and truth among crowds.

far less of a limit, in an age of quantum computing

@DesireNoDesires I would think the options would be the same because of programming; quantum computers would just deduce faster toward a programmed opinion. It is a file cabinet.

@MilesPurdue it's able to compare and contrast multiples of "files" at a time as well, giving it the ability to "understand" far more complex concepts than the average processor can.

@MilesPurdue for example, it would be able to compare the difference between "black" when used to refer to a crayon and when used in the context of a person's race
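As a toy illustration of that "black crayon vs. black person" distinction (my own Python sketch, not how GPT3 or a quantum computer actually does it): even simple counting of the surrounding context words can separate the two uses.

    # Toy word-sense check: decide which sense of "black" a sentence is using by
    # counting context words associated with each sense. (Illustration only; large
    # models learn such associations from data rather than from hand-made lists.)
    COLOR_CONTEXT = {"crayon", "paint", "ink", "shirt", "car"}
    RACE_CONTEXT  = {"person", "people", "community", "voters", "doctor"}

    def sense_of_black(sentence):
        words = set(sentence.lower().replace(".", "").split())
        color_hits = len(words & COLOR_CONTEXT)
        race_hits = len(words & RACE_CONTEXT)
        if color_hits > race_hits:
            return "color"
        if race_hits > color_hits:
            return "race"
        return "unclear"

    print(sense_of_black("She picked the black crayon from the box"))   # -> "color"
    print(sense_of_black("A black doctor discovered the antibiotic"))   # -> "race"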
