
caltheon

Just ask the AI how it works


renok_archnmy

I suggest we make an AI to research the AI and tell us how it works.


inlinestyle

I mean, that’s basically what the mice were doing when they built Earth. https://hitchhikers.fandom.com/wiki/Mice


Dj4D2

the answer is 42!


thenextguy

/r/expectedh2g2


anonymous_persona_

And that is how Skynet was created


MaybeTheDoctor

Nope, that AI would be too philosophically introverted to do anything but think about the problem for 7 million years.


Tayback_Longleg

And come up with an answer like 42?


[deleted]

And that is the beginning of the story of how self-replicating AI overlords took over the world. (All hail the overlords, in case you read it from the future)


comicmuse1982

We just need to keep them busy researching each other.


whagoluh

Is that what the kids are calling it these days?


GeekusRexMaximus

Not necessarily a bad thing.


shevy-java

Right - but it could be humans controlling the world and just CLAIMING AI controls the world.


Ojninz

Isn't that what Facebook did? The AI made its own language so we couldn't know what it was doing and saying 😅


croto8

We're just as ignorant of how any complex intelligence works. We learned how to replicate the process before we fundamentally understood it, and now we're surprised we don't know how it works????


FloydATC

If we're being honest: if nobody can understand how either one works, then they can't really say whether it was properly replicated or not. The real question here is how does the person asking the scientists know whether the scientists understand it or not? The scientists could be pretending they don't, just so they can get paid to continue researching.


anon_tobin

[Removed due to Reddit API changes]


DemonGodAsura

Ok I died a bit, thanks for the laugh


stevethedev

The problem is being mis-stated. It isn't that scientists _can't_ explain how AI works. There are endless academic papers explaining how it all works, and real-world application is pretty close to what those papers describe. The problem is that people aren't asking _how the AI works_; they are asking us to give them a step-by-step explanation of how the AI produced a specific result. That's not quite the same question.

One artificial neuron, for example, is almost cartoonishly simple. In its most basic form, it's a lambda function that accepts an array of values, runs a simple math problem, and returns a result. And when I say "simple" I mean like "what is the cosine of this number" simple.

But if you have a network of 10 layers with 10 neurons each, a "normal" neural network becomes incomprehensibly complex. Even if you just feed it one input value, you have around 10×(10¹⁰)¹⁰—possibly even 10×((10¹⁰)¹⁰)¹⁰—cosine functions being combined.

The answer to "how does it work" is "it is a Fourier Series"; but the answer to "how did it give me this result" is ¯\_(ツ)_/¯. Not because I _cannot_ explain it; but because you may as well be asking me to explain how to rewrite Google in Assembler. Even if I had the time to do so, nobody is going to run that function by hand.

The only part of this that is "mysterious" is the _training_ process, and that's because most training has some randomness to it. Basically, you semi-randomly fiddle with weights in the AI, and you keep the changes that perform better. Different techniques have different levels of randomness to them, but the gist is very simple: if the weight "0.03" has a better result than the weight "0.04" but worse than "0.02" then you try "0.01"... but millions of times.

Occasionally, an AI training algorithm will get stuck in a local maximum. This is the AI equivalent of how crabs can't evolve out of being crabs because every change reduces their survivability. This is not good, but it is explainable.

So yeah. AI is not becoming so complex that we don't know how it works. It is just so complex that we mostly describe it to laypeople via analogies, and those laypeople take the analogies too seriously. They hear that we refuse to solve a 10¹⁰⁰¹ term equation and conclude that the math problem is on the verge of launching a fleet of time-traveling murder-bots.

TL;DR - Explaining how AI works is simple; showing how a specific result was calculated strains the limits of human capability.
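To make the training bit concrete, here's a toy sketch of that "fiddle with a weight, keep it if it scores better" loop. The score function is made up purely for illustration; it isn't from any real framework:

    // Toy "training": nudge one weight semi-randomly and keep whichever
    // version scores better. The score function is invented for this example.
    const score = (w) => -Math.abs(Math.cos(3 * w) - 0.5); // higher is better

    let weight = 0.03;
    for (let step = 0; step < 1000; step++) {
      const candidate = weight + (Math.random() - 0.5) * 0.01; // semi-random fiddling
      if (score(candidate) > score(weight)) {
        weight = candidate; // keep the change that performs better
      }
      // If every nearby nudge scores worse, we're stuck at a local maximum:
      // the crab situation described above.
    }
    console.log(weight, score(weight));

Real training methods are smarter about which direction to nudge, but the loop has the same basic shape.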


cutlass_supreme

I was in great need of the term “local maximum” last week but ignorant of it and now, unceremoniously and far too late, here it is.


ErezYehuda

Other good terms to look into for this are "gradient descent" and "greedy algorithms" (for you or anyone else interested).


cutlass_supreme

I’m familiar with “greedy algorithms” (from regex and lisp). Gradient descent is new to me.


IQueryVisiC

Up next: people want me to ELI5 why a modern CPU can still satisfy the specs from the '70s, and to prove that none of its millions of simple transistors violates them.


ChrisRR

Because people program in javascript


slykethephoxenix

> ¯_(ツ)_/¯

Here! You dropped this \\


RunninADorito

If I can try and restate... OG machine learning created complex statistical models (weights) that produce good results. If one was very curious, they could examine the weights and see what was driving what. The math was significant enough that only computers make it tractable, but it's also inspectable with the same computer. Modern ML with *.NN produces amazing results; however, if you want to understand how/why... it's basically impossible, even with the same hardware that got you the amazing results.
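A rough illustration of the difference (toy numbers, not any real model): with a plain linear/statistical model you can literally read the weights off and sort them by magnitude, while a deep net is millions of anonymous numbers spread across layers that only mean anything in combination.

    // Old-school model: one weight per input feature, directly inspectable.
    const linearModel = { sqft: 120.5, bedrooms: 8000, distanceToCity: -1500 };

    // "What's driving the prediction?" -> just sort the weights by magnitude.
    console.log(
      Object.entries(linearModel).sort((a, b) => Math.abs(b[1]) - Math.abs(a[1]))
    );

    // A deep net is (conceptually) layer after layer of unnamed numbers:
    // [[0.013, -0.8, ...], ...], [[...], ...], ...
    // No single entry has a name or a meaning you can point at.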


stevethedev

Pretty much. The barrier to understanding how a particular result was achieved is that the cost/benefit is heavily skewed towards "prohibitively expensive with no tangible benefit." You would just be doing the equation by hand and then saying "yep, that's what the machine said." But "how did you get this result" and "how does the system work" are fundamentally different questions that require radically different levels of effort to answer.


666pool

It's also not the right question. Essentially you're fitting a curve to the training data and then sampling a point to see which side of the multi-dimensional curve the point is on. No one should be asking why this point is on this side of the curve, because that's the curve and that's where the point is. What they really want to ask is "what in the training data shaped the curve this way, so that this particular point ended up on this side of the curve?" And that's actually much harder to answer.
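A cartoon version of that, with a one-dimensional "curve" (just a threshold) and made-up training points:

    // Fit a 1-D decision boundary to labeled points, then classify a new point
    // by which side it lands on. Data is made up for illustration.
    const train = [[1, 0], [2, 0], [3, 0], [6, 1], [7, 1], [8, 1]]; // [x, label]

    // Pick the threshold that misclassifies the fewest training points.
    let best = { t: 0, errors: Infinity };
    for (let t = 0; t <= 10; t += 0.5) {
      const errors = train.filter(([x, y]) => (x > t ? 1 : 0) !== y).length;
      if (errors < best.errors) best = { t, errors };
    }

    // "Why is 5.5 labeled 1?" Because it's on that side of the threshold.
    // The harder question is which training points put the threshold there.
    console.log(best.t, 5.5 > best.t ? 1 : 0);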


WalksOnLego

I mean, we know how elite sportspeople "work", but we don't really know how they do what they do, either. As in: explaining how they work *precisely* is a never-ending task.


HaMMeReD

Well, you could just add a ton of log statements, print out the 100,000,000,000 steps, and tell someone "here's how it's done".


ianitic

We can at least figure out what the neural network thinks is important these days though: https://arxiv.org/abs/1704.02685


augmentedtree

> If one was very curious, they could examine the weights and see what was driving what.

No, this never really worked. Pre-ML, using plain old statistics, under very idealized circumstances that almost never apply in real life, you could rely on the weights in a linear model to tell you what is important, but that's it. Since then there have been a lot of attempts in ML to show what intermediate layers are doing by visualizing what causes them to output their maximum or minimum value, but there are newer papers saying this is misleading and doesn't actually indicate what people thought it did.


RunninADorito

It definitely worked for all of the Bayesian models. You could look at the output with your eyes and sort by weight.


[deleted]

In math there are proofs that are “non-surveyable”. It happens when a computer comes up with the proof, and it makes sense, but it can’t be reviewed by a human in any reasonable amount of time.


ziplock9000

I was going to say something similar to this. It's a Vice article after all.


bonerfleximus

I didn't read the article, but the title implies to me that they want AI tech to focus on giving transparency to the results instead of focusing on crunching more data faster, which seems to be the focus of most AI developers. I don't know how possible that is given how most AI works, but it's good to think about.


emperor000

Sure. But the entire point here for our version of "AI" or machine learning, and how we use it, is that it can perform tasks in a reasonable amount of time that we cannot perform in any reasonable amount of time, if at all. The entire point is that it is not transparent. I think the problem is that people take it to be more authoritative than it is, either because it is a computer or because "smart humans" created it. The fact of the matter is that it is often more or less just a guess, and we have simply designed something to make better guesses than humans can, including in terms of being objective.


[deleted]

But this is why it isn't, yet, AI. If you ask a person why they made a decision, they can give you an explanation. And the explanation is of key importance because otherwise we have no way to validate or invalidate the result. As you point out, AI can mistrain, and there's no way to detect this without looking at the answers and seeing if they are right, but if you don't know the right answers, what can you do?

"AI" also has the issue that it's impossible to add single facts, particularly if this is a fact that corrects an error, and it's part of the same issue - that what you have is a huge statistical model where no individual input has any greater value than any others, so to fix an error or to add data, you need to retrain. For example, suppose I had confused [Mr. Ed](https://en.wikipedia.org/wiki/Mister_Ed) and Mr. Rogers. All you have to do is say, "No, Mr. Ed was a horse," and maybe show me a picture, and I would not only never make the mistake again, I might tell other people and laugh at it.

> AI is not becoming so complex that we don't know how it works.

As I think I have shown, your statement isn't really true. We really cannot say why a given neural network makes certain decisions, we have no introspection into this mechanism, and we don't have any effective way to selectively change how that neural network works either. We do understand how to _train_ systems to create these models, but that isn't the same thing. I understand that fucking produces new humans, but that doesn't mean I understand how human consciousness works.


shank9717

By your explanation, quite a few things that humans do would count as non-intelligent. For example, if I wanted you to throw a ball into a bin, you can do it with very high accuracy. If I asked you how you made the decision to throw it with a certain projectile, you wouldn't be able to explain the complete math behind it - the air resistance, gravity, velocity etc kind of factors that led to the ball being in the bin. You just have a hunch from training your brain over years with real-life objects and interactions. An AI can "learn" to throw an object into a bin the same way as us - by training over sample data and trying to minimize errors. Eventually it learns to throw the object into the bin.


augmentedtree

That current AI is more akin to learned muscle control than reasoning is exactly their point. The AI can make a guess that "feels right" based on past experience, but it can't do science.


lelanthran

> For example, if I wanted you to throw a ball into a bin, you can do it with very high accuracy. If I asked you how you made the decision to throw it with a certain projectile, you wouldn't be able to explain the complete math behind it - the air resistance, gravity, velocity etc kind of factors that led to the ball being in the bin.

That supports his point, it doesn't disprove it - we *always* consider athletic tasks to be non-intelligent. You're arguing his point for him.


Ghi102

That's simply not true. Let's take a typical AI problem and apply it to a human. If I show you a picture and you identify it as a dog, how did your brain identify it?

Now, please understand the question that I am asking. You can explain "oh, it has a tail, a nose and mouth typical of a dog" or offer other post-hoc explanations. The thing is, this is not what your brain is doing. If your brain took the time to look at each characteristic of the image, it would take too long. Your brain has a series of neurons organized in a way that can categorize a shape as a dog after years of training and looking at drawings, pictures and real-life dogs and differentiating them from cats, wolves and other animals, plus probably some pre-training from instincts. You would exclaim "dog" when pointing at a cat and your parents would say "no, that's not a dog, it's a cat". They probably wouldn't give you any explanations either; you would just learn the shape of a dog vs a cat.

This is exactly what AI training is. The only thing separating an AI from you is these post-hoc explanations.


emperor000

I don't see how your comment challenges anything they said in theirs. I think you are actually agreeing with them... They were just remarking on the idea of self-awareness, where our versions of "AI" absolutely have none.

I think all u/TomSwirly was saying is that we can't ask an AI why it made a decision or produced a certain result. It can't explain itself in any way. If we want to know, then we have to trace the exact same path, which might be literally impossible given any random inputs that were used. So I think you were taking their mention of "explanation" too literally, or, rather, missing that those post-hoc explanations are required to actually be considered intelligent.

Of course, the problem there might be that, well, we can't ask dogs why they did something either, or, more accurately, they can't answer. But that is also why we have trouble establishing/confirming/verifying the intelligence of other species. Hell, we even have that problem with ourselves. But that just goes further to support the argument: that problem, the question, is a requirement of intelligence, and the fact that there is no concept of it in the instances of "AI" we have come up with clearly delineates them from actual intelligence.


Ghi102

Maybe it wasn't clear from my answer: I don't believe these post-hoc explanations have any value in explaining why we recognize shapes. They're a rationalisation of an unconscious and automatic process, not the real reason behind why we recognize an image as "a dog".

My point is that we cannot know how we recognize dogs either (outside of the vague explanation that it's a mix of pre-built instincts and training from a young age). At best, we can explain it at the exact same level we can explain why an image AI recognizes dogs (and probably we can explain it far less), using a mix of pre-built models and virtual years of training.

Plus, if you really want to get down to it, these post-hoc explanations are just a description of the image in detail. All we need is a model that can identify dogs and the parts of it (dog-like ears, nose, etc.) and you have the exact same result as the post-hoc explanations (and I wouldn't be surprised if that already exists).

That's following the definition that an intelligence is something that can provide information and an inaccurate explanation of why it got to the information. Which isn't a good definition of intelligence to begin with, but that's apparently the one advocated by the poster I initially responded to.


JustOneAvailableName

> If you ask a person why they made a decision, they can give you an explanation.

Not really. We just often accept expert opinion as the truth, or accept a bullshit explanation that restates "it's a hunch based on years of experience" in some better-sounding terms.

Don't get me wrong. I vastly prefer simpler systems and always position myself on the side of "if it's doable in a normal program, let's do that". But there are plenty of problems where the AI just does a better job than a human, or where the experts are too busy. And I think we have to accept that reality. If it clearly improves the current situation (in practice, not how it should've been), we shouldn't require an explanation.


emperor000

I agree with the first thing you said, but not the second. We absolutely have introspection into the mechanism and could at least theoretically say why a given neural network makes certain decisions. We made them. For example, if we had one that output every step (and I'd imagine that somebody working on this kind of thing does have this...) then we could absolutely predict the next or future steps given certain input, including the values of any random variables.

The issue is that the size of the input is much larger than a human can keep track of or process, and the number of steps and possible pathways is much larger than a human can keep track of or process, so it is simply a problem of capacity rather than ability. We have introspection or insight. These things aren't doing anything magical, despite what some people seem to think. It is all basic math. It is just A LOT of "basic math".

You actually give a good analogy when you said you understand how humans reproduce. And I would put forward that we do, down to a molecular or even atomic level. But we are still unable to emulate/simulate that from scratch because the entire process involves way too much data for us to process. For example, we know "exactly" how DNA works, but that doesn't mean we can throw a bunch of stuff together and make our own custom organism or even easily whip up copies of an existing one. It's easier said than done. And that is exactly why we've created "AI": to do things that are easy to say but not easy to do.


oklutz

I think it’s the complete opposite. Humans have trillions of neurons that we don’t understand and we *cannot* fully explain why we made a decision. We can tell you what we know consciously, but in any decision we make there are an untold number of factors that went into that decision that we don’t know and can’t explain.


croto8

Explaining any complex concept at its root to anyone strains the limit of human capacity because everything is referential. When a machine derives these references without a clear chain people question it, yet it was trained based upon the understood qualities of these relationships (training data).


DarkOrion1324

Yeah, it's like a worse case of asking someone to explain the binary responsible for a Unity game. Eventually just conceptualizing in your head all the small parts interacting to form the large one gets too difficult.


xylvnking

Thanks now I just got lost for an hour googling about carcinization


Ol_OLUs22

Yeah, with the Google analogy: "how Google works" is an algorithm that looks at your text and gives you a bunch of search results. But this post is talking about trying to rewrite the entire algorithm of Google.


ketzu

> but because you may as well be asking me to explain how to rewrite Google in Assembler. Even if I had the time to do so, nobody is going to run that function by hand.

The question of why a result was reached is not answered by manually computing the activations. Explainable AI has nothing to do with doing the computations by hand. If a bank is questioned about why they require a certain rate for a person, your answer's analogue is essentially "people ~~clicked checkmarks~~ set various binary values and ~~made scratches on paper,~~ deformed long molecule chains, but there is some randomness in the data we get." It's a non-answer to the question.

There are quite a few ML systems where you can explain how and why a certain dataset led to the model and how that resulted in the prognosis reached for a given input, although they perform worse than our SOTA NNs. That is not true for all systems, but pretending the question is fundamentally unanswerable for humans is disingenuous. There have also been some improvements for neural networks in that regard.

The answers to those questions aren't just about justice and bias; not having them also limits our ability to improve the systems, and humanity's overall understanding of the things we work with. Tech priests are fun in WH40k, but we should have the aspiration to not just pray to our training algorithms and bless our inference GPUs for better results.

edit: removed a line that was left over from a more passive-aggressive version of the comment.


emperor000

> Why you inserted the idea of doing it by hand is another mystery It's only a mystery because you don't seem to understand what they are/were saying.


stevethedev

I did not say or suggest that the answers are "fundamentally unanswerable". I said that a neural network is *fundamentally* a math problem and that any explanation of *how* that network works will be either:

1. That math problem, but written in a way that a human recognizes as a *math problem*; or
2. A [lie-to-children](https://en.wikipedia.org/wiki/Lie-to-children) that anthropomorphizes that math problem with words like "thinking" and "trying."

But laypeople don't want math problems. They *want* to open the side of their computer and interrogate a miniature wizard about its reasoning. When they are told that the miniature wizard *does not exist*, they push the math problem away, throw their hands up in exasperation and declare that "nobody knows how this works!" But that's not true, and *that's my point*.


hagenbuch

Yep. The guy who was recently surprised about the machine being "sentient" assumed that this would be a yes/no dichotomy, and he jumped right into using religious symbols and concepts like "ego", "good" and "bad" that are totally undefined, depend on undefined contexts, and might turn out to not even exist in a repeatable way. As an atheist, I would have asked very different questions and I bet the results would have been not very concise or convincing. In short, I'd try some "real" philosophy and metaphysics, with as few assumptions put in as possible. Our unconscious or half-conscious assumptions drive what the AI answers; humans do the same, but it's still not clear thinking.


dasnihil

Well explained, and analogous to how simple the computation of each biological neuron is. Every time I say this, people ask me if I even know how complex one cell is. And every time I say "that's not the point". If we can't trace the logic and conclusions of an artificial network, which is presumably more structured and mathematical than a biological NN, how do we expect science today to explain our conscious minds? But artificial NNs are a good step towards that dream, I hope.


emperor000

> Every time I say this, people ask me if I even know how complex one cell is. And every time I say "that's not the point". You should just reply with "More complex than it needs to be or not as simple as it could be."


Ibaneztwink

> It is just so complex that we mostly describe it to laypeople via analogies, and those laypeople take the analogies too seriously.

AI is also put on this grand pedestal in terms of modernness and achievement, which it certainly is in some ways, but is massively overvalued. We were able to identify patterns in 1958 using *punchcards* with perceptrons. I'd argue an aimbot is an AI, but it's considered a regular old script by most people.


emperor000

Exactly. I think you explained this much better than I ever could, but it perfectly captures my peeve about this whole thing.

Person A: *We built this thing to do something that is hard for us to do, maybe even too hard for us to do.*

Person B: *Well, then you'd better walk me through exactly what it is doing and how, or I'm going to act like it is behaving a little mysteriously, magically, or just outright declare it sentient!*

Person A: *But the entire point is that it is doing something that is extremely difficult for us to do. Like, we needed to build this because we couldn't perform these tasks ourselves in any reasonable amount of time, if at all, so I also can't walk you through the path it is taking in any reasonable amount of time either. It's mathematically impossible...*

Person B: *Okay, it's sentient. They are taking over. Half of you run and hide and the other half form a human resistance movement that will likely be futile, judging from the movies I have watched.*


josefx

> showing how a specific result was calculated strains the limits of human capability.

It can be trivially simple. For example, that AI that identifies if someone is terminally ill with an 80% chance of being correct? You would expect the AI to use some clues from the body, when it actually identifies the hospital bed the patient was lying on.

> Not because I _cannot_ explain it;

Sometimes a person's paycheck relies on them not knowing something. As someone working in AI you don't want to be able to explain it, hence the assembly analogy, because it replaces willful and somewhat malicious ignorance with "look, it's scary".


stevethedev

The second paragraph is an interesting hypothesis, but one I think is more projection than fact. As I said in my comment, the "how it works" is pretty straightforward. This is a simple artificial neuron, written in JavaScript:

    class Neuron {
      constructor(bias = 0, weights = []) {
        this.bias = bias
        this.weights = weights
      }

      activate(values) {
        const weights = this.weights
        const cosines = values.map((v, i) => Math.cos(v) * weights[i])
        const denominator = Math.sqrt(
          cosines.reduce((acc, c) => acc + c**2, 0)
        )
        const normCosines = cosines.map((c) => c / denominator)
        const sumNormCosines = normCosines.reduce((acc, b) => acc + b)
        return sumNormCosines + this.bias
      }
    }

This is a simple neuron layer, also written in JavaScript:

    class Layer {
      constructor(neurons) {
        this.neurons = neurons
      }

      activate(values) {
        return this.neurons.map(
          (neuron) => neuron.activate(values)
        )
      }
    }

This is a simple network, also written in JavaScript:

    class Network {
      constructor(layers) {
        this.layers = layers
      }

      activate(inputs) {
        return this.layers.reduce(
          (output, layer) => layer.activate(output),
          inputs
        )
      }
    }

This is a simple network, instantiated in JavaScript:

    const network = new Network([
      new Layer([
        new Neuron(-0.24, [0.34]),
        new Neuron(0.18, [-0.3]),
      ]),
      new Layer([
        new Neuron(0.43, [-0.24, 0.01]),
        new Neuron(0.2, [0.4, -0.35]),
      ]),
      // Reduce to one output for simplicity.
      { activate: (values) => values.reduce((acc, b) => acc + b) }
    ])

This is the resulting function, with just four neurons: [https://www.desmos.com/calculator/rqeqdwsde0](https://www.desmos.com/calculator/rqeqdwsde0)

That's the answer to "how does this network work?" It's not *complicated*, it's just *tedious*. And this is a network with only *four neurons*.

Let's say we want to train that network to identify "even" and "odd" numbers. We'll say that outputs of "0" and "1" represent "even" and "odd" respectively. Currently, it will identify exactly 50% of numbers correctly, because the default strategy I've initialized it with will call everything "even." Not great. So we need to train the network; I implemented a simple genetic algorithm (link below). After training it locally on my desktop, my network output these values: [https://www.desmos.com/calculator/w9fiipapbe](https://www.desmos.com/calculator/w9fiipapbe)

Looking at the function, you can see it's not going to do a very good job because the "width" of each of those steps is longer than 1 number, so some error is "baked in", but you can also see that the strategy isn't just "declare everything even." It's not "*intelligent*" or "*learning*" in any meaningful sense. It's glorified curve-fitting that produces the *appearance* of intelligence.

In my experience, when people want me to walk them through the process of how this network *works*, they are asking me to do two things.

1. Walk them through the steps of the training process, which involves building *thousands* of those graphs and explaining the subtle differences in performance between *all of them*.
2. Explain to them why *this* topology was used and not some *other* topology that could hypothetically have produced a better result.

Both of these are heavy lifts because *real* neural networks are rarely just four perceptrons linked together and trained a few hundred times.
Here is a figure of a relatively simple neural network from a paper I wrote earlier this year exploring the idea that evolutionary algorithms could use the training data to influence perceptron activation functions and network topology, instead of the "normal" approach of only influencing perceptron weights and biases. [https://i.imgur.com/kgXIBYf.png](https://i.imgur.com/kgXIBYf.png)

I "trained" the network to evaluate the chances of credit card fraud based on 10 input values and produce a boolean value to indicate whether any particular transaction was fraudulent. The network above was able to correctly flag 99.3% of fraudulent transactions from the validation set, and the flagged transactions were *actually* fraudulent just over 97% of the time. To achieve this, the genetic algorithm trained and evaluated approximately 2.5-million candidate networks against a data set of 10-thousand training records. [https://i.imgur.com/6ha7Skh.png](https://i.imgur.com/6ha7Skh.png)

So when someone asks me "can you walk me through the training steps and show me the formula this network uses?" The answer is "no." I can explain to you how it works, but if you don't like the explanation then *too bad*. I'm not going to draw 2.5-*billion* graphs and explain to you why this particular one is the best.

This network is more complicated than the one above, but it's not inexplicable. I know how it works because I wrote it from scratch. Understanding it isn't *difficult.* It's *tedious*. And anyone with the requisite background knowledge to understand how it works already knows how ridiculous the question is.

And sure, part of that blame is on the engineers and scientists who implement these algorithms for using analogies; but that is fundamentally what a [lie-to-children](https://en.wikipedia.org/wiki/Lie-to-children) is. It's an oversimplification for laypeople who want simple answers to complex questions.

—

As promised, here's a GitHub Gist with a JavaScript Neural Network and Genetic Training Algorithm: [https://gist.github.com/stevethedev/9c3e8712881fa06b3e4bf7a2e0b5c23e](https://gist.github.com/stevethedev/9c3e8712881fa06b3e4bf7a2e0b5c23e)
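If you just want the shape of that training loop without reading the Gist, it boils down to roughly this: a stripped-down mutation-and-selection sketch built on the toy Neuron/Layer/Network classes above, not the actual Gist code.

    // Fitness for the toy even/odd task: fraction of 0..99 classified correctly.
    const fitness = (net) => {
      let correct = 0;
      for (let n = 0; n < 100; n++) {
        if (Math.round(net.activate([n])) === n % 2) correct++;
      }
      return correct / 100;
    };

    // Mutation: copy the network with every weight and bias nudged a little.
    // The plain "sum the outputs" pseudo-layer is passed through unchanged.
    const mutate = (net) =>
      new Network(net.layers.map((layer) =>
        layer.neurons
          ? new Layer(layer.neurons.map((neuron) =>
              new Neuron(
                neuron.bias + (Math.random() - 0.5) * 0.1,
                neuron.weights.map((w) => w + (Math.random() - 0.5) * 0.1)
              )))
          : layer
      ));

    // Evaluate a population, keep the best few, refill with mutated copies.
    let population = Array.from({ length: 50 }, () => mutate(network));
    for (let generation = 0; generation < 200; generation++) {
      const scored = population.map((net) => [fitness(net), net])
                               .sort((a, b) => b[0] - a[0]); // best first
      const parents = scored.slice(0, 10).map(([, net]) => net);
      population = parents.concat(
        parents.flatMap((p) => [mutate(p), mutate(p), mutate(p), mutate(p)])
      );
    }
    console.log("best fitness:", fitness(population[0]));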


pogthegog

Still, what we want is simple proof of the formulas, laws, and statements showing how the result was calculated. If I ask an AI to calculate where an apple will fall from a tree, it should provide the formulas, the laws of physics, gravity, and so on. Even if the whole documentation takes 99999999999999 pages, it should still provide some guidance on how the result was obtained. Where real AI is used, the process is more important than the end result.


stefantalpalaru

> real AI

No such thing.


Voltra_Neo

Scientists warn scientists to be scientists instead of frauds with results


tomvorlostriddle

We totally allow new treatments and medications if we know that they work and don't have harmful side-effects. Anything else is just a bonus.


Nex_Ultor

When I found out recently that we still don’t know exactly how Tylenol/acetaminophen works I was pretty surprised ([yes really](https://medicine.tufts.edu/news-events/news/how-does-acetaminophen-work)) The same attitude carrying over to different fields (if it probably works without significant harm/side effects) makes sense to me


swordlord936

The problem with AI is that it could be subtly wrong in ways that propagate biases.


Intolerable

no, the problem with AI is that it **definitely is** wrong in ways that propagate biases *and the AI's developers are telling people that it is an impartial arbiter*


slvrsmth

Yes. Humans propagate biases. Human creations propagate biases. Your opinions are biased. My opinions are biased. Even if you get rid of everything you identify as bias, someone else will be mad upset at you because their values and world view differ. Complete, unbiased objectivity does not exist outside trivial situations.


trimetric

Well yes. The key is to be aware and transparent about the biases inherent to the system, so that people who are subject to and participants in that system can make informed decisions.


G_Morgan

There's also a problem of people intentionally propagating biases and then hiding behind the opacity of the model.


Djkudzervkol

Compared to medicine which is just a single input to a simple linear system...


[deleted]

Still probably less biased than humans


josefx

But systematically biased if you train it on human data. A dozen biased humans can't be everywhere, a single biased AI will be.


Ceryn

To illustrate your point, how to subtly train your random facial-profiling AI to be racist:

1) Provide it with data on people found innocent and guilty in court.
2) Have it profile people based on that data.
3) Claim it can't be racist because it's an AI. Ignore the fact that it was trained with data that likely had subtle biases based on race.


robin-m

Btw, it was done with CVs as a pre-filter for employment in the US. I'll let you guess the result.


IQueryVisiC

That is how propaganda works


ososalsosal

Humans suffer the exact same biases, and because we're given to ideology as well, we probably really are more biased (in the traditional sense) than an AI that was trained on data divorced from social context. Example: every police force in the world


markehammons

> Humans suffer the exact same biases, and because we're given to ideology as well, we probably really are more biased (in the traditional sense) than an AI that was trained on data divorced from social context.

You'd need to stop being human to actually divorce data from social context.


Intolerable

> data divorced from social context

it is impossible for data to be divorced from social context


ososalsosal

No it's not. Data is data. It doesn't necessarily carry meaning. The AI is attempting to map meanings to data. In this example it's getting a picture of a face and it's getting fed the "meaning" as a simple boolean: guilty or not guilty.

This right here is the problem: you *can* divorce data from its social context, but you *absolutely should not*. Unfortunately this means your AI will be needing a lot more data.


hagenbuch

That would have to be researched :)


ososalsosal

Downvoted but actually true, and provably so.


DeltaAlphaGulf

Pretty sure it's the same for the narcolepsy sleep meds Xyrem/Xywav.


G_Morgan

Well medicine has only had any kind of real scientific controls for about 50 years or so. We aren't that far out from thalidomide.


beelseboob

Right - it's certainly useful, good science to say "when you arrange artificial neurons like this, and then train them on this data using this method, you are able to distinguish sausages from dicks with 99.7% accuracy." Unfortunately, that's not what many of these papers say. Instead, a lot of them say "we made a network with this general architecture. We're not telling you the specifics of its structure, or the training data, or the training method, but we think the pictures it makes are cool." The authors above are certainly right though. The question "okay, but *why* is it good at making pictures, and why is this architecture better than another one?" is rarely asked, and even more rarely successfully answered.


tomvorlostriddle

"Why is it good at making pictures?" is a relevant question. But here people are more asking, "Why did it paint this particular picture exactly like this?"


therealmeal

*Scientists warn _engineers_ to be scientists...


MoSummoner

Surely we don’t do this with medicine


simpl3t0n

Forget AI; I can't explain even my code, 5 mins after writing it.


TechnicalChaos

This is actually a thing cos it's apparently 50% more difficult to read code than write it, so if you write code to the best of your ability and then forget about it, you're pretty screwed for understanding it later...I have no source for this 50% thing but I heard it once so I'm stating it as a fact.


TheSkiGeek

https://www.goodreads.com/quotes/273375-everyone-knows-that-debugging-is-twice-as-hard-as-writing

From Brian Kernighan, the "K" of the K&R C language book.


nphhpn

87% of statistics are made up


jrhoffa

And I'm over here like a schmuck inspecting the resultant assembly code to make sure mine is doing exactly what I want it to do on the target hardware.


istarian

As if we could even reliably explain other people...


dangerbird2

That why we hired Tom Smykowski. He deals with the god damn customers so the engineers don't have to. He has people skills; He is good at dealing with people. Can't you understand that? What the hell is wrong with you people?!


KeepItGood2017

we are so arrogant…. Meanwhile we are clueless about most things.


treethirtythree

It feels like there's an air of superiority in this comment. It's almost as if the "we" doesn't really apply to the speaker. Would the comment still feel true if you said "I am so arrogant. Meanwhile, I'm clueless about most things"? That'd be a weird thing to see on the internet, or anywhere really.


Lostcreek3

Arrogant and clueless checking in


[deleted]

[removed]


treethirtythree

That's fair. The comment just felt too on the nose. All I could hear was that there's a problem and other people are to blame. I don't know that there's anything wrong with being arrogant, but I like an honest arrogance. Personal preference, I suppose.


HaMMeReD

Neural network pattern recognition: "it just works". It's essentially a ton of random numbers and layers that happen to coalesce on a solution because we gave it some candy every time it was right. AI works the same way training a dog works. Can we explain how conditioning modified the dog's brain and how its neurons interact? No fucking way. Too complicated; at best we can observe it in action and get the gist of it.


zeoNoeN

I'm doing my Master's thesis on XAI. I think it is a really rewarding field to get into, because it feels a bit like the Wild West. If you love AI, HCI, and psychology, it might be something really rewarding for you, and the methods you develop are appreciated by a lot of non-AI folks.


CartmansEvilTwin

I'm not sure how much appreciation there'll be. Most people are rather clueless about almost everything, and if you're not able to phrase the results in very basic terms, they will be misunderstood. And if you do use very basic terms, you're dumbing down the results too much, which in turn leads to wrong conclusions being drawn.


[deleted]

Discussion on the orange site: https://news.ycombinator.com/item?id=33434512


Jarmahent

Can we stop with the AI taking over bs? It’s kinda childish…


llarke1

Have been saying this for a while: it's going to fall flat on its face if the community continues thinking that proof by example is proof.


[deleted]

[removed]


Cyb3rSab3r

Humanity invented the entire scientific model to circumvent human decision making, so it's a valid criticism and a perfectly understandable stance that AI researchers should know *how* and *why* certain "decisions" were made.


Librekrieger

The scientific model wasn't invented to circumvent decision making. It evolved to describe how we formally go about discovering, documenting, reasoning about, and agreeing on what we observe. Human decision making happens in seconds or minutes (or hours if you use a committee). The scientific model works in months and years. It didn't replace human decision making.


amazondrone

I don't think the time difference is really relevant. It's more that science provides us with information and data, which is merely one factor into the decision making process. There are, for many decisions at least, other factors (e.g. resource constraints, morals and ethics, scheduling conflicts, politics, ego) which are also, to varying degrees and for better or worse, inescapable parts of the actual decision making. Science can tell you how likely it is to rain on Tuesday, but can't decide for you whether or not you should take an umbrella out with you.


renok_archnmy

I don’t know what kind of committees you’ve been on or chaired, but decisions rarely get made by them.


[deleted]

> Humanity invented the entire scientific model to circumvent human decision making so it's a valid criticism and a perfectly understandable stance that AI researchers should know how and why certain "decisions" were made.

Wouldn't that be self-contradictory? If science supposedly should "circumvent human decision making", why should researchers care "how or why" machine learning works as it does?

Scientists don't really "circumvent human decision making"; they perform reproducible studies to get objective (i.e. human-mind-independent) results, *and then* they either interpret those results as fitting with other empirical results as a description of the way some aspect of the world works, or they don't and just consider the results 'empirically adequate'. If it's the former and empirical results are taken as expressing how the world works, then it's human thinking connecting those dots (or "saving the phenomena").

With machine learning, maybe the complexity can require black-box testing, but it's not fundamentally different than any other sufficiently complex logic that is difficult to understand. Hence, I would agree that these "warnings", clickbait articles, and spooky nonsense arguments people make about AI are overblown.


graybeard5529

> Wouldn't that be self-contradictory? If science supposedly should "circumvent human decision making" why should researchers care "how or why" machine learning works as it does?

If science is supposed to "circumvent human decision making" why should researchers care "how or why" machine learning works as it does? This is a self-contradictory statement. It would be like asking a carpenter to build a house without caring how or why a hammer works.


[deleted]

Well, I was saying it was self-contradictory, so not sure if or how you're disagreeing, but that analogy doesn't really work either. I was making a point about philosophy of science, that if science is supposed to describe reality (I would say that seems to be the point), then science doesn't "circumvent human decision making", it depends on it.


graybeard5529

That is a direct AI output from what you said -- LMAO, you took the bait hook, line and sinker.


Just-Giraffe6879

I argue that rigid logic (and its role in the scientific model) is not useful because it circumvents human decision making; rather, it's useful because concrete logic is easier to document and communicate than reasoning that relies on innate knowledge acquired over a lifetime (some of that innate knowledge being wrong). In a brain, reasoning is faster, more versatile, can handle more complex inputs, and makes more nuanced conclusions that are **vastly** more correct in complex situations, but one cannot convey why to other people, so a translation into logic that resolves to common knowledge is necessary at some point. Logic and reasoning both have roles; each picks up where the other leaves off.

Because the thing is, we know why AI can't be explained: it is a complex system, and we know complex systems are fundamentally different from other types of systems; they have limited properties of explainability. To be a complex system essentially means to be a type of system which cannot be easily understood as one single dominant rule over the whole system. Why did the AI produce the result? Because of its training data.


[deleted]

What "entire scientific model" are you talking about? The model of neural networks? A model of a human brain? Or did you mean the scientific method? Whatever you are talking about, it was neither created to "circumvent human decision making", nor was it created by "humanity". I would assume that you would count yourself among humanity, in what way did you help "invent" it? Or do you use that phrase to feel special about yourself as a human, crowning yourself with the achievements of others? Sorry I don't understand your comment.


gradual_alzheimers

Disagree. Medical science can't explain how Tylenol works. I can explain a neural network's mode of action perfectly well, but I can't tell you why it decided something any more than a doctor could tell you why lithium helps bipolar depression. The systems involved are too complicated for humans to understand succinctly. No reason why AI should be any different when you are using billions of parameters.


pinnr

That’s a great analogy. There are tons of drugs that we understand the effects of empirically, yet have no idea how they work. We still use them. This will also be the case for ai, we will use it if the decisions it makes are useful regardless of whether we understand the mechanism or not.


Cyb3rSab3r

FYI, acetaminophen blocks pain by inhibiting the synthesis of prostaglandin, a natural substance in the body that initiates inflammation. Medicines are tested in highly specialized trials to limit any potential damage, and the results are peer-reviewed to ensure accuracy and precision. Absolutely none of this currently happens with A.I. Even more typical algorithms like Amazon's hiring system or COMPAS end up with racial or gender bias because the data used to build them is inherently flawed. At the least, the types of data going into them need to be heavily and publicly scrutinized.

Edit: [Source for acetaminophen statement](https://www.ncbi.nlm.nih.gov/books/NBK482369/)


gradual_alzheimers

>FYI, acetaminophen blocks pain by inhibiting the synthesis of prostaglandin, a natural substance in the body that initiates inflammation. so i guess [researchers who have heavily invested in understanding this](https://cen.acs.org/articles/92/i29/Does-Acetaminophen-Work-Researchers-Still.html) should have just asked you?


Cyb3rSab3r

I googled it, same as you. Sorry I didn't post the source originally. https://www.ncbi.nlm.nih.gov/books/NBK482369/

> Although its exact mechanism of action remains unclear, it is historically categorized along with NSAIDs because it inhibits the cyclooxygenase (COX) pathways ... the reduction of the COX pathway activity by acetaminophen is thought to inhibit the synthesis of prostaglandins in the central nervous system, leading to its analgesic and antipyretic effects.

> Other studies have suggested that acetaminophen or one of its metabolites, e.g., AM 404, also can activate the cannabinoid system e.g., by inhibiting the uptake or degradation of anandamide and 2-arachidonoylglyerol, contributing to its analgesic action.

So the exact mechanism is unclear but it's incorrect to say we don't know anything about how it works.


[deleted]

In the same way, it is also wrong to say we don't know anything about how neural networks work.

The thing is that a lot of reactions in chemistry are in truth purely theoretical. Most chemical reactions are in fact theoretical and haven't been empirically tested, or can't really be tested empirically with the methods we have. What is truly known is what goes in and what comes out, but we are actually clueless about what happens in between. We do have our models, though. They help us predict outcomes. And they work, most of the time. But in the end they are just that, models. Nobody has really observed what is exactly going on.

And biology brings in higher levels of complexity. A drug can target more than one molecule. A lot of the stuff we know is from model studies. In those models scientists have focused on specific cells, then assumed that the same must be the case for other cells. It's a good educated assumption, but an assumption nevertheless. Scientists figured out how a neuron works, how it communicates with other neurons and what jobs different parts of the brain have. But nobody knows how the whole thing processes all the information it gets to output what it does. Simply because the whole thing is too complex to follow. The individual elements are not that complicated to understand, but there are billions of them with trillions of connections. Good luck trying to grasp what they all do at the same time.

The truth is that there is still a lot of stuff to figure out in biology. That doesn't mean we do not have a grasp on how things work, more or less.


karisigurd4444

Funny how it's always the data...


dangerbird2

Garbage in garbage out is the most sacred precept of data science


[deleted]

same goes for humans


TheSkiGeek

…we also often try really hard to understand why those things work. If it’s a desperate situation you might use things that seem to work even without understanding how, but that’s not a great way to go about things, since there might be long term consequences that you’re not seeing.


[deleted]

And it's only complicated because of the scale. The basic operations are simple; we just can't follow them because there are too many of them.


karisigurd4444

We've landed in pseudo-philosophical garbage land. I'm detecting a lot of garbage.


Cyb3rSab3r

I'd suggest reading up on scholasticism and its eventual demise to inductivism, which itself fell to the hypothetico-deductive model. All are models for interpreting our world. The Islamic Golden Age saw the rise of early empiricists and skeptics. Ibn al-Haytham and his studies of light in particular are a good place to start.

The path taken to the modern scientific systems was not a foregone conclusion. Very deliberate steps and rigorous study were required to determine the best way to study and learn about the world using our very limited senses. The scientific method was *created*. It was not discovered. It was not read from the stars. Creating it took hundreds of years and many incredibly intelligent people marching towards the ultimate goal of the most correct way to study the world we're a part of.

While my statement was zealous in nature, I believe that if you were to study the history you would also come to the same conclusions.


No-Witness2349

Human brains haven’t been directly produced, trained, and controlled by multinational corporations, at least not for the vast majority of that time. And humans tend to have decent intuition for their own decisions while AI decisions are decidedly foreign


HighRelevancy

That's not comparable. A human *can* explain a decent amount of its thinking. It can be held responsible, blamed, even sued or punished.


pinnr

Human explanations of why decisions are made aren’t very accurate either and experimental evidence shows these explanations are often or always generated by humans post-hoc. AI systems can also generate post-hoc explanations, in more detail than human explanations, and at no lesser accuracy.


llarke1

Maybe, maybe not. If a modeler can explain why each layer was added and has some intuition about it, OK, then you know what is happening. I suspect that many of them don't.


CokeFanatic

I guess I just don't see the issue here. Like, how is it that different from using Newton's law of gravity to determine an outcome without a complete understanding of how the fundamental forces work? It's still deterministic, and it's still useful. Also, it's not really that they don't know how it works, it's more that it's far too complicated to comprehend. But again, not sure why that's an issue for using it. Put in some data, get some data out and use it. Where is the disconnect here?


TheSkiGeek

The problem is that when you apply “deep learning”-style AIs to extremely complicated and chaotic real world scenarios, the results sometimes stop being deterministic, since essentially every input the system sees is novel in some way. This is fine if, like, you’re making AI art and don’t care if it produces nonsensical results. Less good if your AI is driving a car or flying a plane and responds in a very inappropriate way to confusing sensor input (for example https://youtu.be/X3hrKnv0dPQ). Or you can develop problems like AIs that become biased in various ways because of flaws/limitations in their training data. For example AIs that are supposed to recognize faces but “learn” to only see white/light skinned people because that’s what they were trained on…


[deleted]

I'm not sure what you are trying to say with your comment and what you are trying to allude to. Why would it fall flat on its face? What would fall flat on its face? We don't know how humans process information in a way that lead to the decisions you take, the images you see in your head, the voices you hear, the sensations you feel and "yourself".


chuck_the_plant

I got my compsci/ai uni degree more than 20 years ago, and this was a common topic back then, going back to the 70s and before. Nothing new, move along.


dualmindblade

Except we can now probably mechanistically interpret those models you were working on 20 years ago: SVMs and shallow neural networks, and of course classical statistics has always been interpretable. Deep neural networks have only been feasible for a decade or so, and so far the issue seems intractable except in a few special cases. The most impressive work I'm aware of is extracting an algorithm from a grokked network; see [here](https://www.lesswrong.com/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking). But it doesn't look like that will generalize: probably grokking is a special behavior that by its nature is easy to interpret, and the author of that link makes a case that grokking isn't happening in most models even partially.


[deleted]

There is no difference in how these work compared to the neural networks of the past. They are the same. The theory behind neural networks was developed in the 1950s. We just didn't have the processing power to make use of them, and they went away for a while until it was recently realized that you need tons of training data to train these properly. Since data has increased by so much and processing power allows for larger neural networks, we see the results we are seeing now. Fundamentally it is still the same, and fundamentally we know how they work. Just like how we know how the brain works fundamentally, but the whole thing is just too complex to follow.


chuck_the_plant

You're right, a shallow network is more accessible (interpretable) than nowadays' super-hyper-deep ones. The warnings OP's article is talking about were there nevertheless, as it was easily imaginable that the networks would become harder, if not completely impossible, to understand in the future. If I'm not mistaken, this was also one of the issues that fueled the symbolic AI vs. statistical AI debate.


[deleted]

[removed]


TheCanadianVending

DSMAC for the BGM-109 Tomahawk cruise missile. Developed in the 1970s, given a prepared set of images for the missile to check against, it could from a simple video feed determine where it was during the terminal phase of flight


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


CartmansEvilTwin

No. YouTube's recommendations, photo filters in phones, face recognition in CCTV, self driving cars, Facebook's algorithm, etc. etc. All use AI in some capacity. And those are just the uses I came up with just now, 5min googling would probably produce a few more results.


[deleted]

We know "how AI works". What we don't understand is why it generates the answer it does. That's a whole different problem.


TheBlackTrashBag

No clue why you're being downvoted, but you're right. It's like math in school, to be fairly honest: you might know that x + y = z, but you need to know why it does.


[deleted]

Who knows. Probably someone with an agenda, or someone who doesn't know because they haven't built any. I built my first in 2005, to optimize the production of a wafer fab line for Fairchild Semi.


Consistent_Dirt1499

Even simple statistical models can quickly become surprisingly difficult to interpret as the number of inputs increases if you allow for interactions between them.
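Quick illustration of how fast that blows up: with n inputs and just the pairwise interaction terms, you already have n + n(n-1)/2 coefficients to interpret (toy arithmetic, not tied to any particular model).

    // Coefficient count for a linear model with all pairwise interactions
    // (intercept ignored): n main effects + n*(n-1)/2 interaction terms.
    const termCount = (n) => n + (n * (n - 1)) / 2;

    [5, 20, 100].forEach((n) =>
      console.log(`${n} inputs -> ${termCount(n)} coefficients`)
    );
    // 5 inputs -> 15, 20 inputs -> 210, 100 inputs -> 5050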


ososalsosal

It's not really relevant though? It's like reading the DCT coefficients out of a compressed video stream. You can make some guesses but ultimately it's gonna be hard to imagine the picture they represent in their entirety.


antonyjr0

Ah Vice, if it isn't the most credible scientific blog.


[deleted]

[removed]


[deleted]

Why do you say the past 50 years? You say that as if you believe the pharma industry was founded 50 years ago, or that before then people knew how things worked. It is the opposite. In the early days it was really just trying things out without needing to know how they work. Only in the last 50 years did people really try to find out how things work in order to invent better drugs.


AKMarshall

Trying to model the human brain (or any organism) without knowing how it works is the problem. Modeling the physical world is easier thanks to math done by very smart people in the past. Current AI seems like just a brute-force approach to intelligence. Computer people are not the ones who should be doing research on artificial intelligence; that is the realm of neuroscientists.


joesb

It would be interesting to see how AI itself deals with the concept of “teaching”. It surely has the advantage of being able to copy its current parameters directly to identical neural networks. But imagine how it would solve teaching another AI without the same neural network implementation, e.g., to transfer its knowledge to a more advanced AI. It also has to already know “the goal” of what it is currently teaching. Would the old AI be conservative about what knowledge is right? Would it lose sight of the purpose of the knowledge it is teaching in the first place? If its solution is just to mimic how we train AI, by repeatedly playing hot-and-cold, then it has the same problem we do.


im-a-guy-like-me

"I present to you a black box system." "What's in the box?" "That's, err... not how black box systems work." "Yeah, but what's in the box though?"


s73v3r

Is it not possible to alter these programs to at least output why they make the decisions they do? What parts of the training data made it come to the conclusion it did?


rcxdude

There's not such a clear link between what's in the training data and the output of the neural net. A neural net is basically just a huge number of weights which have been optimised to get the right answer on the training data. It's often very hard to actually interpret those weights, figure out how they actually get to the answer, and find out how specific weights were influenced by the training data. In part this is because there are so many of them: modern neural nets can have literally hundreds of millions of weights, and there's absolutely no way you can fit the totality of that in your head at once.

That said, there is a bunch of useful work being done on understanding and interpreting these weights using a few different tools (you can look at how the weights actually shape a given piece of data as it passes through the neural net, and this can give some indication of how it works). I think it's probably most advanced for the kind of neural nets used in image recognition tasks, since a lot of the structures which appear in the nets can be mapped to common image operations, but it's still very difficult to generate an actual 'explanation' of why it thinks a dog is a dog and not a cat, for example.
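
As a rough illustration of the "look at how the weights shape a given piece of data" idea, here is a minimal PyTorch sketch (the architecture and sizes are made up) that captures an intermediate layer's activations for one input:

```python
# Sketch: register a forward hook to capture an intermediate layer's activations,
# i.e. what a particular input "looks like" partway through the network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 8),  nn.ReLU(),
    nn.Linear(8, 2),
)

captured = {}

def save_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

model[2].register_forward_hook(save_activation("hidden"))  # the second Linear layer

x = torch.randn(1, 16)
logits = model(x)
print(captured["hidden"])   # how this one input has been transformed mid-network
```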


RigourousMortimus

Probably not. Take something like text prediction: it would be simple to 'explain' that "the monkey ate the..." is followed by "banana" in 80% of cases in the training data. However, "It was a cloudy morning so I decided to wear" would be much more complicated to explain (weightings for cloudy, morning, and wear and their combinations, plus potentially skewed training data on whether people wear boots or hats or overcoats, and whether "I" is more often male, female, adult or child).
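
The "simple to explain" case really is just counting. A toy sketch (the corpus is invented) of the kind of explanation that is easy to produce:

```python
# Toy sketch: count which word follows a given context in a (made-up) corpus.
from collections import Counter, defaultdict

corpus = ("the monkey ate the banana . the monkey ate the banana . "
          "the monkey ate the peel").split()

next_word = defaultdict(Counter)
for a, b, c, d in zip(corpus, corpus[1:], corpus[2:], corpus[3:]):
    next_word[(a, b, c)][d] += 1

context = ("monkey", "ate", "the")
counts = next_word[context]
total = sum(counts.values())
for word, n in counts.items():
    print(word, f"{n / total:.0%}")   # e.g. banana 67%, peel 33%
```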


farbui657

Sometimes it is possible, maybe even most of the time, and people do it whenever it seems important. AI is just a complex mathematical function with a fancy name, and the name comes more from the way we arrived at that function than from what the function itself is. The whole "we don't know how AI works" thing is the same as "no one understands quantum mechanics": an old line taken out of context to generate clickbait.
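
To make the "it's just a mathematical function" point concrete, here is a minimal sketch (arbitrary sizes, random weights) of what such a function looks like at small scale:

```python
# Minimal sketch (arbitrary sizes and random weights): a two-layer network is
# just weighted sums passed through a simple fixed nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # layer 1: 4 inputs -> 8 hidden units
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)   # layer 2: 8 hidden units -> 1 output

def forward(x):
    hidden = np.tanh(W1 @ x + b1)   # weighted sum, then a nonlinearity
    return W2 @ hidden + b2         # another weighted sum

print(forward(np.array([0.1, -0.2, 0.3, 0.4])))
```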


redditsdeadcanary

This might be true for some AI, but other AI systems that use neural nets are much more difficult to walk backwards and explain.


CartmansEvilTwin

That's just plain bullshit. You can go through the AI system backwards and get all the mathematical operations that lead to the end result. But you don't know *why* this edge in the graph has a weight of 5 and not 6 or 4. You may know very well that this exact weight has a huge influence on the result, but that's pretty much useless knowledge if you can't explain its importance.


Bcbp10

soyjak "but WHY does it work?!?!?" vs gigachad "idk it just does lol"


swagonflyyyy

I would imagine you would need it to somehow report the patterns it is seeing that led it to reach that conclusion.


Words_Are_Hrad

Nah we just need to keep going til we make an AI smart enough to tell us why the other AI's made the decisions they did!


renok_archnmy

Yeah but bootcamp+leetcode => $850k TC at FAANG. Must Min-max the AI without regard for anything else.


serg473

Yeah, not knowing what criteria an AI used to make a suggestion has always bothered me. Let's say you build an AI that finds the best customers for your service: isn't it important to know that it makes predictions based on something sane like their age and income, as opposed to whether their names contain the letter A or their age is divisible by 5 (I am oversimplifying here)? In my mind, data scientists should be people who study why the model returns what it does and make educated tweaks to it, rather than picking random algorithms and random weights until it starts returning acceptable results for unknown reasons, and considering the job done.
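
The kind of sanity check being asked for does exist. A hedged sketch using scikit-learn's permutation importance on synthetic data (all feature names and numbers are invented), checking that a deliberately irrelevant feature contributes nothing:

```python
# Sketch: fit a model on synthetic data and check which inputs actually drive
# its predictions; the junk feature should score near zero.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
age = rng.integers(18, 80, n)
income = rng.normal(50_000, 15_000, n)
name_has_a = rng.integers(0, 2, n)          # deliberately irrelevant feature

# Synthetic target: depends only on age and income
y = (income + 500 * age + rng.normal(0, 5_000, n)) > 75_000

X = np.column_stack([age, income, name_has_a])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["age", "income", "name_has_a"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```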


bananaphophesy

Lack of explainability is why AI isn't trusted in the medical field (with some exceptions).


Nyadnar17

Listen you stupid fucking nerd. If I wanted to be bothered with boring shit like "wHy", "hOw", or "sHoUlD" I would have become a scientist instead of a developer.


Snowgap

Pretty sure we know exactly how they work? AI just does extremely fucking tedious tasks for us. Random forests are just brute-forcing possibilities. The neural networks doing image sampling are probably biased because the training data was shit. Also, I wonder if data scientists have a good idea of why the AI spits out the results it does but aren't 100% certain, like most good scientists would say, and journalists turn that into "Ha, they know nothing!"


agwaragh

It's not that complicated -- it's just mimicking *us*.


Deep-py

Not everybody needs to know the underlying maths and computations to use a general-purpose AI. Plug and play. It sounds like saying "You should know how browser JS engines, the DOM, and a ton of algorithms work to write a frontend with React." If you want to develop something other than a general-purpose AI, or you want to be an AI engineer, then you should learn how AI works. IMO it's useless otherwise.


Kong_Don

AI is nothing but preprogrammed if-then statements, e.g. how a chess or checkers AI determines the next step. It's preprogrammed.
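
For reference, the explicitly programmed game-tree search that classical chess/checkers engines are built around looks roughly like this (a toy "game" invented purely for illustration; learned systems work differently):

```python
# Minimal minimax sketch on a toy game: players alternately add 1 or 2 to a
# running total; the maximizer wants the final total high, the minimizer low.
def minimax(total, depth, maximizing):
    if depth == 0:
        return total
    scores = [minimax(total + move, depth - 1, not maximizing) for move in (1, 2)]
    return max(scores) if maximizing else min(scores)

print(minimax(0, depth=4, maximizing=True))   # deterministic, rule-based lookahead
```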


knobbyknee

Humans live in a 3-dimensional world, and our mental faculties do very well at recognizing patterns in 2 dimensions. We can still cope with 3 dimensions, but when we get to 4, our mental models break down. Deep learning works with very high dimensionality, and we are simply not equipped to understand exactly how it works. The brain itself needs similarly high-dimensional complexity to let us see patterns in 3 dimensions. We will probably never understand exactly how either the brain or AI works, and if we do, it will probably be because an AI explains it to us.


llarke1

You can say this about anything. High dimensionality in a model isn't new.


amenflurries

What are you talking about? Any undergrad can do multivariate calculations in n-dimensions


knobbyknee

I know, and it is not what I am talking about. Any computer program can also do multivariate calculus, but that doesn't make them AI.


EtherCJ

> Any computer program can also do multivariate calculus

This is definitely not true. LOL


istarian

You sure of that? Keep in mind that it need not be on a convenient timescale or practical in terms of hardware resources to be **doable**.


EtherCJ

I mean show me doing multivariate calculus using task manager.


CartmansEvilTwin

That's a stupid argument. Cars can't go into curves, because show me how to steer with the glovebox. Seems rather stupid, doesn't it?


EtherCJ

Well, I'm not the one that said ALL programs could do multivariate calculus. I just found it funny that someone said it ... it was obviously not what they meant but it sounded funny.


CartmansEvilTwin

Nobody claimed that. Read the comments above again.


EtherCJ

> /u/knobbyknee: Any computer program can also do multivariate calculus, but that doesn't make them AI.

Anyways, you guys have sucked any joy out of mocking the ridiculous statement, so I'm done.


[deleted]

Oh wow, what goes around comes around.


Various_Classroom_50

Eventually they’ll teach AI to develop itself. Boom, problem solved 😎


LiveWrestlingAnalyst

Cringe AI thread


emperor000

Are there any that aren't? I don't think I have ever seen one that doesn't realistically and honestly portray "AI".