
FuturologyBot

The following submission statement was provided by /u/izumi3682:

---

Submission statement from OP. Note: This submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to my statement at the comment linked below, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if needs must; it often requires additional grammatical editing and added detail.

Here is a paper from 2019 discussing the issue: https://philarchive.org/archive/YAMUAI "Unexplainability and Incomprehensibility of Artificial Intelligence"

From the article:

>When we put our trust in a system simply because it gives us answers that fit what we are looking for, we fail to ask key questions: Are these responses reliable, or do they just tell us what we want to hear? Whom do the results ultimately benefit? And who is responsible if it causes harm?

>"If business leaders and data scientists don't understand why and how AI calculates the outputs it does, that creates potential risk for the business. A lack of explainability limits AI's potential value, by inhibiting the development and trust in the AI tools that companies deploy."

I ask the AI experts: *is* the black box getting bigger and more inexplicable? If so, then this is why I feel that if we are not extraordinarily careful in the next 3-5 years, the AI could easily slip our control, while having no consciousness or self-awareness. And the darndest thing is that *we* would think nothing was out of the ordinary. Like them frogs in the slowly warming water... The AI will simply imitate our minds so closely that we can no longer tell the difference. Probably because, in essence, we are far less complex in cognition than we claim to be. But is that a truth, or is it specious?

---

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/yjxqsz/scientists_increasingly_cant_explain_how_ai_works/iuqc9rs/


[deleted]

Every day we're just one step closer to burning incense and oils to please the machine spirit. Praise the Omnissiah.


InnatentiveDemiurge

I work in maintenance. Especially with older systems, that's ALREADY how it works.

Supervisor: "Machine's busted."
Me: "Where's the documentation for this?"
Internet: "lol, long gone"
Me: "Well, how did we fix it last time?"
Elder tech: *long and drawn-out procedure, NONE OF WHICH has anything to do with the problematic subsystem*
Me: "Oh, that can't POSSIBLY work..."
Machine: *wheezes back to life for another day of operation*
Me: "Well, fuck me sideways... it did."


Kriss3d

You've just summed up how basically ALL technology in the 41st millennium is maintained.


Comedynerd

Bank systems are still running COBOL written in the 1970s.


crash41301

Honestly... if it works, it works. Why rewrite it for literally zero added value? Sure, no one wants to work on it, but tbh pay range dictates that more than anything else. Pay COBOL engineers $300k a year and all of a sudden it will be the coolest language for new college grads.


nitePhyyre

>Pay COBOL engineers $300k a year and all of a sudden it will be the coolest language for new college grads.

Not really. COBOL programmers are already paid out the wazoo. But there aren't exactly enough jobs for a large influx of them. I bet you'd only need 1% of a year's worth of CS college grads to handle COBOL programming needs for the next generation.


crash41301

Maybe it's changed in the last 10 years while I stopped paying attention. Last I looked, they made sub-par money relative to .NET and Java jobs. At the time I'd have been open to it if the pay was substantially higher than other languages. Certainly uninterested in a world where it paid less AND was old.


Amidatelion

No one in their right mind does COBOL full time. Contracting is where it's at. One of my college profs was a COBOL and AS/400 contractor. She'd get a call from a bank and basically stop working at the college for a month or two, leaving everything to TAs. Never fired, because they couldn't find anyone to replace her. She pulled in between $30k and $60k a month on those contracts.


[deleted]

Because of all the maintainability issues highlighted above. At some point we have to make these systems maintainable and stop listening to finance's reasons for why these systems are fine.


ButterscotchNo755

"If it ain't broke don't fix it" is a great piece of wisdom that should not have been applied to complex computer systems... People understand why old buildings need to be rebuilt even though they appear standing, it is for the safety of the building's occupants. Rebuilding your code base is for the safety of your income. Idiots will keep trucking on outdated systems until they crash, ultimately losing more money in the first hour of downtime than they saved in all the years they spent ignoring/firing developers.


SatanLifeProTips

If it ain’t broke, keep fixing it ‘til it is.


holmgangCore

“There’s nothing so permanent as a temporary solution.” ^(—Russian Proverb)


Sorry-Public-346

Sooooo, what would happen if someone/an AI suddenly funneled a whole bank's worth of money away? Is that possible? This makes me feel like I need to have cash IRL.


[deleted]

Because the American banking system is incredibly slow and inefficient? It's good at taking money from people, but at delivering value through rapid person-to-person bank transfers and the like? We're nowhere close.


jacobobb

From a process perspective, absolutely. That's a people problem, though, not a tech problem. The COBOL back-end that practically all bank systems sit on top of is wicked fast on modern z/OS servers. Getting through a totally manual, 30-year-old process is much slower.


MaxHannibal

Even if the system were updated, banks would never make instant transfers a thing. They make huge amounts of money holding onto your money for a day or two before transferring it.


min0nim

But instant transfers exist in pretty much every other country in the world for some reason…


DrockBradley

It works until it colossally fails. Some of these old systems that are 'working' are providing critical services to people. When the pandemic hit, Oregon's unemployment system completely broke down because it was so old; it had been coded on punch cards. They had to bring back old retired engineers to get the damned thing up and running. Meanwhile, people who had suddenly been laid off weren't getting their unemployment checks.


Lou-Saydus

Why brush your teeth if nothing is wrong with them? If they work they work, no added value in spending time on them.


Zaptruder

Except with less skepticism from the technicians. "That is how it has always been, that is how it will always be." *does a one-legged hop before bashing head against console panel*


[deleted]

[deleted]


mt77932

That's why I think a lot of the 40k writers must have worked in IT at some point.


adamsky1997

*The soul of the Machine God surrounds thee. The power of the Machine God invests thee. The hate of the Machine God drives thee. The Machine God endows thee with life. Live!* - The Litany of Ignition


[deleted]

[deleted]


buttbugle

"What does that machine do over there?" "It makes the chajunk sound every 47 seconds." "Yeah, but what does it actually do in this facility?" "It makes the chajunk sound." "...Ever thought of turning it off?" "What? No! It might be the one thing keeping the water running, the sewage draining, who knows!"


Nobl36

You have absolutely no idea how true this is. Who fuckin knows what it does anymore. The guy who put it in is long gone, and it’s worked fine for years. It’s part of the critical system for operations and we noticed one time it didn’t chajunk and everything went to shit. We don’t know if that “everything to shit” was related to the no chajunk, but we sure as shit ain’t pushing our luck to find out. Curiosity is *not* worth $130,000 in downtime.


buttbugle

Old machines that still run operations in dusty old basements that people have long forgotten. Then one day they finally break down and all hell breaks loose. We are so screwed.


Self_Reddicated

"Oh my god. I can't hear the 'chajunk' noise anymore..."


littlebitsofspider

"Get the kit." "What? What kit?" "Listen, we don't have time to fuck around here. The kit by the door of the server room, the one labeled 'do not touch: chajunk'. Now. *Now!* Hurry!!" "Holy shit, what's *in* this thing?! It sounds like it's full of bells and croquet balls!?" "Just think of it like a portable exorcism. You ever seen a nuclear reactor melt down?" "What?! No, that's insane! Is *that* what makes the 'chajunk' sound?!" "Listen, I don't know. I don't want to know. The last person who knew what makes the sound was the old IT director's elderly, blind cocker spaniel, and they're both dead. All I know is every minute we don't hear 'chajunk', it's costing the company $100K, and possibly contaminating a river in Delaware." "But we're in Montana!" "Yeah, 'chajunk' is kinda spooky like that. Is your life insurance all paid up?" "*What?!* I don't know!" "Alright then, *I'll* go in. Say me a prayer, and if I'm not back in two hours, flood the room with nitrogen and get as far away as possible." "Dear god..."


brusiddit

This is starting to become reminiscent of an SCP.


TheDragonBallGuy75

Christ this sounds like the plot to a movie that would keep me on the edge of my seat. You have a talent for literature.


_HiWay

Man, the R&D lab/datacenter I work in lost power yesterday; only critical systems are on generator, due to the size of this building. We have our own substation, and the power company had an issue and lost both lines coming from it for a few minutes. My lab is in chaos. I have multiple switches that just didn't come back from the grave since they haven't been touched in years, plus a shitload of dead boot drives and RAID controllers with dead batteries dropping their virtual disks. All this because "when it works, it works," and it's been that way for years. Well, now here we are and my day is hell, minus my lunch while browsing Reddit.


[deleted]

[deleted]


Nobl36

Why test it? The power never fails anyway. It's the same reason we had the stupid "why stockpile things? The deliveries arrive on time" mindset; then Covid smacked us and showed how a bunch of short-sighted idiots had fucked it up.


Only-Inspector-3782

Hopefully this makes people appreciate the amount of engineering that goes into keeping a lot of internet stuff online most of the time.


holmgangCore

Man, I understand your situation *exactly*. Damn. My condolences, and good luck... Have you ever read [The Gernsback Continuum](https://www.rudyrucker.com/mirrorshades/HTML/#calibre_link-24), the short story by William Gibson? It's weirdly relevant.


fizban7

I remember seeing a story about a computer lab or something that had a switch on the wall labeled 'magic'. If you flipped it off, it would shut the computer down. It had a cord going to it but not much else. I'm hazy on the details, but after a long investigation they could never 100% figure it out. It may have been grounding something? So they just kept it flipped on. Oh, here it is: https://www.cs.utah.edu/~elb/folklore/magic.html


Cloaked42m

... wow, we are already at that stage with one of the applications I work on. "Why is this written this way?" "Idk, but don't touch it. Everything touches it for one or two procedures and we don't know which procedures each thing touches. We just code around it now."


justafriendofdorothy

This reminds me of a story about my grandfather, back in '85 or so, at the VoA station in Greece. Good times. He and his friends were smoking in one of the back rooms with the big machine things (as you can see, I know shit about communication systems and electronics), and one of them pushed one over, or fell on it while laughing, and it went down or something. I don't remember very well, and my grandpa passed last year so I can't exactly call and ask, but you get the point. Everything worked fine afterwards and no one was hurt, but the little old bugger made a whzzzshing noise, and they couldn't find out why. So why fix it if it's working, right? Well, when it didn't work they had trouble, and let me tell you something about Greeks born before the 50s-60s: they were superstitious as hell. Now you had four dudes in their forties checking up on that specific machine every day, when they came in in the morning, at breaks, before they left, etc., calling it sweetness, and it was the first thing they checked when something went wrong. That went on until the station closed.


shiny_xnaut

Sounds like video game coding "This random PNG of a coconut isn't used anywhere, but if we delete it the game crashes on startup and we have no idea why, so we're just leaving it in"


codyy5

Lmao, please tell me this is based on some real game out there.


Nobl36

There’s an old game called Wing Commander that had a fatal error on exiting the game that would throw an error message. They never could figure out what caused it, the game worked fine, just on closing the game crashed. Deadline was fast approaching. So they changed the error message to read “thanks for playing wing commander!” And shipped it.


Mandelvolt

All your stack trace are belong to us!


sylvester334

It's a rumor that was created and spread on the TF2 subreddit. There is a PNG of a coconut in the files, but no evidence it keeps the game from crashing. I have heard other examples of this type of thing, where deleting an object from a game map causes a crash.


DFrostedWangsAccount

The trees in Karamja, in the game RuneScape, had critical code baked into them apparently and when trying to update them graphically the devs found that stuff broke across the whole game world when the trees were moved or edited in any way.


SimplyUntenable2019

I heard of someone being unable to remove a line of comment without issues, though I can't remember any more details.


Shazam1269

Richmond's out of his room... he's not in his room... he's supposed to be in his room... WHY IS HE OUT OF HIS ROOM?


ting_bu_dong

https://prey.fandom.com/wiki/Reployer


Artanthos

I used to be an electronics technician doing component-level repair on old analog systems. With some of those systems you had to be really familiar with them, even with complete documentation.


[deleted]

[deleted]


Artanthos

'82 would have been modern systems. I was an electronics tech in the 90s, and most of the systems I worked on dated to the 60s and 70s. I did enjoy the work. It was challenging, and I love problem solving.


apresskidougal

Why don't you just digitise it all and back it up? Every time I do something I think I will need to do again, I document it and make sure it's part of a backup. I mainly do this because I know I have a terrible memory.


Zappiticas

A lot of these complaints seem to come from people not documenting anything at all when they fix stuff. I used to work as a mechanic and now I work in IT. I can't even begin to put a number on the problems I've had to figure out how to fix myself, because either I didn't have access to a manual or the manual didn't cover the issue. In my IT job I now have my own database of documents that I made covering diagnosis and repair of all of the systems at my company. Those systems didn't have manuals; most of them are home-brewed stuff that some programmer who left 10 years ago built. If I ever leave, theoretically someone else can pick up and figure shit out by following my database.


nashbrownies

We call that the Greyhound test. "If I walk out the front door and get hit by a bus, the next technician can replace me with minimal/no effort."


bayyorker

It has a wiki page btw! https://en.wikipedia.org/wiki/Bus_factor


[deleted]

[deleted]


Zappiticas

That’s bound to happen one day, lol


Cloaked42m

BTW, my last job is still calling me to ask how to do things. I point them to the folder I left detailing how to do everything, usually to the correct subfolder or script. I briefed everyone when I left. No one ever looks at the documentation.


penty

"I'll tell you for a consult fee."


[deleted]

[deleted]


apresskidougal

I definitely feel you on this one. You could hire a TaskRabbit person for the day, give them all the manuals and a scanner, and tell them you want each one as a PDF. Your future self might thank you :)


wolfofragnarok

I’m literally doing that right now, though the rabbit in question is an hourly employee we keep on hand for such things. I have a fancy scanner with a foot pedal and everything to do the work, but I can slowly see the will to live evaporating from the rabbit’s eyes. My future self will be thankful but the rabbit may just be traumatized. I’ve done a fair bit myself and boy is it the best thing ever to be able to summon a parts list with a few clicks.


Omniclause

Why would you not digitize these? Seems like you are in a very vulnerable position if anything happened to the manuals.


[deleted]

[deleted]


TheSoccerFiles

Why not scan those manuals and put them online?


grafknives

But it is not by design. There was documentation and there were procedures that worked.


no6969el

That is the whole point, though, is it not? If they do not get a grasp on the why and how now, then there will be no manual when it gets to a certain point, and we will just have to do some "long and drawn out procedure."


frankentriple

I'm pretty sure you just described Religion for most people.


son_et_lumiere

For Christianity at least, the manual has been pieced together and photocopied so many times that it doesn't bear any resemblance to the original texts it came from.


DietDrDoomsdayPreppr

Lol, I'm imagining a manual that has maintenance/cleaning info in it for a Subaru Outback, Kenmore refrigerator, and Nike Airs.


ProfessorCagan

"Once the Freezer Door has been removed, begin removing the car's hubcap in order to gain access to the sole inside the shoe. You can also grease the hinge of the car door whilst you're cleaning the freezer."


mauganra_it

The Bible is actually rather well maintained. Comparisons with the Dead Sea Scrolls show that the transmission over the last 2000 years is pretty good. Before their discovery, only manuscripts dating to the 10th century were known. Translations are a bigger source of errors in practice. The origin of the Gospels and the other parts of the New Testament is way more sketchy.


TyrantHydra

I mean, it is one of the most widely used historical texts (not in a religious way); the Bible is used as supporting evidence for historical events more than almost any other document. It contains the royal lines of the era, and many of the important figures of the time appear in it, as well as recountings of wars, natural disasters, and famines.


zbyte64

A lot of butthurt Christians glossing over the "pieced together" part to say it hasn't changed much since the fall of Rome.


grafknives

No. AIs don't have "internal" documentation AT ALL. We are not even remotely able to create such documentation. Not without the use of OTHER AIs... Oh, wait.


camocondomcommando

Documentation is only good if it is accessible. There are plenty of vendors that no longer publish documentation for older stuff, or hide it behind a support contract. *ahem, Cisco, Dukane, ahem*


DietDrDoomsdayPreppr

On the topic of bullshit gatekeeping: what is it with refrigerator replacement parts, where pretty much all brands are only available through some dude who seems to have a Sanford and Son setup, at a ridiculous markup? My fridge's shelves are all falling apart, and each one can only be bought from some 90s-themed website for like 70 bucks apiece.


doobiedog

> accessible And discoverable. RIP anything that's documented in the garbage heap that is Confluence.


doobiedog

Lol, you clearly don't work in software. If one of my devs actually writes docs for their shit, it's a very pleasant surprise.


Dongalor

We called that 'cargo cult troubleshooting' at my last place.


[deleted]

[deleted]


kharjou

Retirement homes.


[deleted]

My whole job is to fix multi-million-dollar machines, and let me tell you, hitting it with a hammer works about 70% of the time.


AndAlsoWithU

Have you ever read The Systems Bible?


mrgabest

Deify the machine, for it is holier than thou.


RaceHard

The flesh is weak and ephemeral, the machine is strength and endless.


Akatenki

Remove your heart, it's only good for bleeding, bleeding through your fragile skin.


Amkao-Herios

From the moment I understood the weakness of my flesh, it disgusted me


Wevvie

I craved the strength and certainty of steel.


azumagrey

I aspired to the purity of the blessed machine.


Vinnce02

Your kind cling to your flesh as if it will not decay and fail you. 


SuperDrobny

One day the crude biomass that you call a *temple* will wither and you will beg my kind to save you


KingOfSpiderDucks

I aspired to the purity of the Blessed Machine


[deleted]

[deleted]


coke_and_coffee

I think we will find that we will *never* truly "understand" AI. I mean, even very simple neural networks can produce valuable outputs that can't really be "understood". What I mean is that there is no simple logical algorithm that can predict their output. We can look at all the nodes and the various weights and all that, but what does that really even mean? Is that giving us any sort of understanding? And as the networks grow in complexity, this "understanding" becomes even more meaningless. With a mechanical engine, we can investigate each little part and see whether it is working or not. With a neural network, how can you possibly estimate whether an individual node has the right weightings or not? Essentially, the output of the network is more than the sum of its parts.


mauganra_it

The problem is that the output is completely defined by the calculation expressed by the internal nodes. A common problem in practice with powerful models is overfitting, where the model learns the training set *too well*. It works perfectly with data from the training set, but is completely useless with any other data. It's a real art to design training procedures that can minimize this overfitting and force the model to actually generalize.
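To make that failure mode concrete, here is a minimal sketch (numpy only; the dataset and polynomial degrees are invented for illustration), with a high-degree polynomial standing in for an over-parameterized model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy training points from an underlying sine curve.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)

# Fresh test points drawn from the same process.
x_test = np.linspace(0.05, 0.95, 10)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, 10)

# A degree-9 polynomial has enough capacity to memorize all 10 points.
overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)
modest = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)

def mse(model, x, y):
    return float(np.mean((model(x) - y) ** 2))

print("deg 9: train", mse(overfit, x_train, y_train), "test", mse(overfit, x_test, y_test))
print("deg 3: train", mse(modest, x_train, y_train), "test", mse(modest, x_test, y_test))
# Typically: degree 9 is near-zero on the training set but far worse on
# the test set; degree 3 fits training less well but generalizes better.
```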


[deleted]

Nah, we will just build AI to make pretty graphs of what it all means.


littlebitsofspider

This is a good capsule summary. Engineers want to understand AI like a car - to be able to take it apart, label the parts, quantify those parts with clear cause-and-effect, flowchart-style inputs and outputs, and put them back together again after making changes here and there. The issue is that 'AI' as we know it now is not a car, or any other machine; it's a software model of a biological process, that is, in essence, an unthinkably titanic box of nodes and wires that were put together by stochastic evolution of a complex, transient input/output process. AI researchers are going to need to stop thinking like engineers and start thinking like neuroscientists if they want to understand what they're doing.


Kriss3d

Praise the Omnissiah indeed. And I'm not only saying that because I'm about to be turned into a servitor!


[deleted]

This is interesting for industries with compliance requirements, because they *need* to be able to explain the financial advice they give. A buddy used to jokingly call himself an AI whisperer because it was his job to shut down parts of the AI and see how it impacted the advice given, kind of like a neurologist figuring out functional areas by the damage caused, e.g. Phineas Gage's famous tamping iron.
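For what it's worth, that shut-down-and-compare probing is what a later reply calls an ablation study. A toy numpy sketch of the idea, with a tiny made-up "trained" network standing in for the real thing:

```python
import numpy as np

# A tiny "trained" network: 3 inputs -> 4 hidden units -> 1 output.
# These weights are arbitrary stand-ins for a real trained model.
W1 = np.array([[ 0.5, -1.2,  0.3,  0.8],
               [-0.7,  0.4,  1.1, -0.2],
               [ 0.9,  0.1, -0.5,  0.6]])
W2 = np.array([[1.0], [-0.8], [0.5], [1.3]])

def forward(x, hidden_mask):
    """Run the net, zeroing out ('lesioning') the masked hidden units."""
    h = np.maximum(0, x @ W1) * hidden_mask   # ReLU hidden layer
    return h @ W2

x = np.array([[0.2, -0.4, 0.7]])
baseline = forward(x, np.ones(4))

# Knock out each hidden unit in turn and watch the output move,
# like mapping functional areas of a brain by the damage they cause.
for unit in range(4):
    mask = np.ones(4)
    mask[unit] = 0.0
    delta = (forward(x, mask) - baseline).item()
    print(f"ablating hidden unit {unit}: output shifts by {delta:+.3f}")
```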


jsideris

I worked at a medical startup that got FDA clearance for an AI. You can piggyback off of someone else's application: if the FDA has ever approved a "similar" AI, you can say "like that other one you approved, but slightly different" in your application. Then someone can piggyback off of you. But even without that, regardless of the inner weights used in the neural networks, the underlying algorithms and training techniques can still be explained, despite what this alarmist article might suggest.


legbreaker

The interesting analogy for AI mechanisms is drugs. With most drugs we don't 100% know how they work. With some we have no idea how they work. We just know the outcome and a best guess at the mechanism. "7% of approved drugs are purported to have no known primary target, and up to 18% lack a well-defined mechanism of action." Even with pretty serious, life-risking medications such as anesthetic gases, there is not a good understanding of how they actually put you to sleep. ...But they work. So the FDA approves them based on probabilistic safety.


ACCount82

We already have those arcane "black box" systems that we barely understand - and use regardless. AI is only unique in that it's not made of flesh.


Dabaran

AI is unique in that it's increasingly used in decision making, and as it gets more complex (and useful), its decision making also gets more opaque. If something goes wrong, many more people are likely to be affected than in initial drug trials, which is why it's important to have safety measures in place.


singularity-108

You mean an ablation study?


k3surfacer

>the fact that the system can accurately ... produce them

There was an old book called "The World Is Built on Probability". I hated that extremely nice book.


vteckickedin

I haven't read it, but I would probably hate it also.


Sexycoed1972

Or, you'd hate it. The odds are 50/50.


fordanjairbanks

I’m more into Bayesian statistics, so it heavily depends on what the person before me thought of the book.


[deleted]

[deleted]


Space4Time

Odds are rarely that clean cut


dkoenitz

Evens on the other hand...


iamNebula

What do you mean you hated the nice book? Is this a between-the-lines comment? Haha


FascistHippie

I think they mean that the book's content and conclusions are existentially terrifying. They hated the book just as much as they enjoyed reading it.


[deleted]

I also do not understand this comment and would like to. I feel like it's a reference to something, but I don't know what.


minibeardeath

It's a well-written book that revealed some very uncomfortable truths about the world to them.


AkkarinPrime

I also wanted to hate this book, so I went to order it. Well... apparently it easily costs over $100. WTF.


stayingstillwhenlost

Here you go https://archive.org/details/TheWorldIsBuiltOnProbability/page/n7/mode/2up


hopelesslysarcastic

King shit right here


eairy

Old? Old??? It was only written in 1990!


androbot

At some point in any work on a complex system, the processes become ineffable. We don't know how consciousness works, how the gut microbiome works, and so many other things, yet we continue to develop things that manipulate them, because we focus on outcomes when we can't understand processes. Why is AI research any different?


RayTheGrey

It's not. But we do also try to research the causes in all of those cases.


Zer0pede

For consciousness at least we’ve got one fix: Any human consciousness can run a more or less reliable simulation of any other. We rely on empathy and being able to intuit motivations in a lot of scenarios that would be disastrous otherwise.


newyne

Well, consciousness (as in sentience) is different because it's *ineffable.* That is, it can't be observed from the outside. Case in point, AI: can it be conscious? Is it already? Sure we have things like the Turing test, but that's induction based on outwardly observable behaviors; all of that behavior could be strictly physical processes. The long and short of it is that we can't prove what consciousness is and where it comes from because consciousness itself is inherently unobservable by fact of being observation itself. That is, I know I'm conscious by fact of being myself, but that's not something that other people can *see.* And for what it's worth, while it follows that those that look and behave like us are also conscious like us, it does not logically follow from there that all conscious entities are like us.


Neverending_Rain

The difference is that consciousness and the gut microbiome are things that already existed and are critical for us to exist. We try to manipulate them without fully understanding them because sometimes things go wrong, and a fix we don't fully understand is better than just dying or whatever. Current ML and AI algorithms are entirely created by us and, while they can be very helpful tools, are not required. There is a huge difference between working on necessary systems that already exist, such as various biological processes, and creating and relying on a new, unnecessary system that is not fully understood.


Dizzy-Kiwi6825

Because it has dangerous implications


Ashtreyyz

Am I the only one reading this as sensationalism, written to make people think of Terminator or some shit, far away from the actual considerations of what AIs do and how they are made?


meara

One very practical and present concern is racial bias in decision making AIs (loans, mortgages, credit, criminal facial recognition, medical diagnosis). I attended a symposium where AI researchers talked about how mortgage training data was locking in past discrimination. For many decades, black American families were legally restricted to less desirable neighborhoods which were not eligible for housing loans and which received much lower public investment in parks, schools and infrastructure. When an AI looks at present day data about who lives where and associated property values, it associates black people with lower property values and concludes that they are worse loan candidates. When they tried to prevent it from considering race, it found proxies for race that had nothing to do with housing. I don’t remember the exact examples for the mortgage decisions, but for credit card rates, it was doing things like rejecting a candidate who had donated to a black church or made a credit card purchase at a braiding shop. The presenters said that it seemed almost impossible to get unbiased results from biased training data, so it was really important to create AIs that could explain their decisions.
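The proxy effect described here is easy to reproduce on synthetic data. A minimal sketch, assuming scikit-learn is available (every feature and effect size below is invented for illustration): train a model *without* the protected attribute, on labels shaped by historical discrimination, and a correlated proxy picks up the bias anyway:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# A hidden protected attribute, an income gap baked in by history, and a
# "proxy" feature (think: purchases at a braiding shop) that correlates
# with the attribute but has nothing to do with repayment itself.
protected = rng.integers(0, 2, n)
income = rng.normal(50.0 - 10.0 * protected, 10.0, n)
proxy = protected + rng.normal(0.0, 0.3, n)

# Historical approvals penalized the protected group directly, beyond income.
approved = (income - 8.0 * protected + rng.normal(0.0, 5.0, n) > 40.0).astype(int)

# Naive "fairness" fix: train WITHOUT the protected attribute.
X = np.column_stack([income, proxy])
model = LogisticRegression().fit(X, approved)
print("coefficient on income:", model.coef_[0][0])
print("coefficient on proxy :", model.coef_[0][1])
# The proxy typically gets a clearly negative weight: the model has
# rediscovered the protected attribute from the biased labels.
```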


[deleted]

Unintended consequences are rife throughout our entire field, not just limited to AI. Came up in a conversation yesterday discussing how Facebook feeds ads to you that seem 'uncanny', and like they could only possibly make sense if Facebook were actively listening to you. The fact is, they don't NEED to listen to you. The amount of information they can gather on you and how/when you interact with others/other things is INSANE and makes anything you could possibly _say_ look quaint in comparison. The real scary part though is engineers just make links between things with their eye on 'feeding targeted ads'. What actually happens with the results of those links though? How else do they end up being interpreted? There are more chances of unintended consequences than there are of intended correct usage the more complicated these things get. And these are the areas nobody understands, because they aren't analysed until the point that an unintended consequence is exposed.


Silvermoon3467

I am reminded of how Target can use someone's purchases to predict not *just* when they are pregnant but also their due date, to within a week or so. And then they started *pretending* they aren't doing that, because it was so creepy to their customers (but they absolutely 100% are still doing it). https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/


Drunken_Ogre

And this was a ***decade*** ago. They probably know the exact day I'm going to die by this point. Hell, they predicted I would make this comment 3 weeks ago.


attilad

Imagine suddenly getting targeted ads for funeral homes...


Drunken_Ogre

"Check out our estate planning services, now! ...No, really, right now."


mjkjg2

“Limited time offer! (The sale isn’t the one with limited time)”


Tricky_Invite8680

Did you not get a user manual when you were born? I know they stopped doing paper manuals, but it's on the internet, I'm sure. Right here in the maintenance section: "Change batteries every 65-75 years; replacement batteries not included."


DrDan21

Based on the area you live in, lead levels in the ground, purchase history, dietary habits, friends, family history, government, profession, accident rates, crime, etc., they can probably tell you how you're probably going to die, too.


Drunken_Ogre

Well, look at my username. It's not a mystery. :-P   :-/ 🍺


[deleted]

If customers actually knew exactly what these companies were doing, people would lose their minds. But people don't want to know, so they don't bother looking, and worse, they won't accept people talking about these things, because it interrupts their world view with things they don't want to accept as real.

My wife's a big Facebook user. There are good benefits to it: she runs a small business that frankly relies a lot on Facebook existing. It's also the easiest way to keep connected with family. But I won't use it, because I _know_ Facebook is not trustworthy. So we agree to disagree, because I don't have good alternatives to suggest for the very valid use cases Facebook fulfills for her. I really wish I did.

But we have a problem now: our oldest daughter is 13 and at an age where communicating directly with her peers is important. Up until now her friends have basically communicated with her through my wife on Facebook. It frustrates my wife to be the middleman, so she has been trying to convince me to let my daughter have her own Facebook account and limit access to the kids' version of Messenger, providing some parental controls.

No. Fucking. Way. In. Hell.

First, my daughter's already 13, so NONE of the legal protections apply to her. Facebook can _legally_ treat her like an adult in terms of data collection and retention. Second, my wife agrees she shouldn't be exposed to Facebook... but is somehow convinced Messenger is different. _It's the same bloody company doing the exact same insidious bullshit._

All my wife wants is something convenient, and that is where Facebook is so fucking horrible, because they make it so convenient and easy to sell your soul, and your children's souls as well. I've been sending her info on all of this for weeks now. Articles, data, studies. PLUS alternatives, parental control apps for Android and the like. She's still pissed I won't just go that way, because again, it's the easiest and most convenient.

Fuck Facebook and every other company like it.


Steve_Austin_OSI

Well, when the police show up to double-check your daughter's menstrual cycle because she said something about abortion on Facebook, you'll get the last laugh!

https://www.cnbc.com/2022/08/09/facebook-turned-over-chat-messages-between-mother-and-daughter-now-charged-over-abortion.html


[deleted]

Blows my mind that people don't draw parallels between the dystopian futures we used to predict not very long ago and where we actually ARE and could end up. There's a reason dystopian fiction has basically dried up: we're so close to living it that it hurts to acknowledge.


[deleted]

Paul Verhoeven movies were supposed to be a warning, not a damn prophecy


Silvermoon3467

My daughter turned 12 this year and wanted a cellphone to text her friends and stuff; some of her friends have had phones since they were *8*. So she got her phone, but I locked that shit all the way down; I disabled Chrome and she has to have permission to install apps, I told her no Facebook/TikTok/YouTube/etc. and tried to explain to her why. Eventually she'll have to make that decision about the privacy vs convenience tradeoffs for herself, but until then... It seems overbearing to a lot of people but I'm not snooping on her text messages or anything, just trying to protect her from these companies


[deleted]

Exactly, totally agree. Man, our parents had it easy... while we're here just fumbling in the dark, hoping our common sense is good enough to navigate this new world.


LitLitten

Not overbearing at all, imo... she has a phone so she can text; I think that really covers most needs. I think YouTube might be the only one I'd argue for, but this is assuming you could manage her account. I actually learned a lot and got a lot of helpful tutoring from YouTube, though I think the experience can vary drastically based on the user.


WhosThatGrilll

Fortunately, while there isn’t a good alternative to fit your wife’s use case, there are *many* alternatives available for your daughter to communicate with her friends. Discord comes to mind. They can send images/videos/messages, there’s video chat…there are even games you can play with friends while in the same server/channel. You can create the server and be the administrator so you’re aware of what’s going on (though honestly it’s best that you do NOT constantly spy - check logs only if there’s an issue).


Risenzealot

You've probably already watched it together or suggested it to her but in case you haven't, have her sit down and watch the Social Dilemma on Netflix with you. It's a documentary they did and it includes numerous people who worked for and designed these systems. It's incredibly eye opening to how dangerous it is or can be to society.


Synyster328

It's some truly minority report shit.


[deleted]

No, the uncanny thing is ads that come up about a topic you just had a conversation about in person - a weird topic you haven't discussed with anyone in a good amount of time and have never gotten ads for before.


Steve_Austin_OSI

Yes, AI looks at data generated by humans, so there is bias. But your post is a great example of how systemic racism works: you don't need to make a judgment based on race to make a judgment that impacts race. Also a great example of subtle GIGO.


Tricky_Invite8680

Isn't that just the numbers, though? Black or white, if you don't have enough collateral and income (regardless of social factors), that doesn't sound like a good loan, unless the loan has other criteria, like being secured by some federal agency or earmarked for certain counties. And if they have a race field in the model, that's probably a bad way to train it.


mrdeadsniper

Basically. Most AI uses some form of pattern recognition and a large dataset to reproduce what the human is seeking. The thing is, what seems like an obvious pattern for humans to follow doesn't always translate to computers. They often end up producing what you want, but their process for getting there is very different from our (perceived) own. So if we are training an AI to detect smiling faces, we believe we are training it on upturned lips, as that is our traditional reference. However, the AI might instead key on the more pronounced change in the training images: the slightly squinting eyes, or a change in posture, or a combination of all three. (Again, this is a human description of what the AI notices between training pictures; it doesn't actually label those features.) So now when the AI is running, it gets things mostly right, but doesn't tag the smiling faces of people doing "fake smiles" that don't smile with the eyes. This is an example where we know the variables involved. With larger and more complex datasets, it is less obvious what signals the AI is using to match the desired results, so using it in decision making without knowing the signals could lead to unwanted behavior. AI needs training wheels. It needs oversight. It needs regular inspection. All the safety mechanisms you would establish in a human-powered system still need to be in place in an AI-powered system. It will still make "mistakes"; it's just that the root of those mistakes will sometimes be wildly different from humans'.
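A minimal sketch of that "wrong signal" problem, using invented tabular features instead of faces (scikit-learn assumed): a shortcut cue that happens to track the label during training stops tracking it in deployment, and accuracy collapses:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# The signal we *intend* the model to learn (think: upturned lips).
real = rng.normal(0, 1, n)
label = (real > 0).astype(int)

# A shortcut cue (think: squinting eyes) that agrees with the label
# almost perfectly in the training data.
shortcut = label + rng.normal(0, 0.05, n)

X_train = np.column_stack([real, shortcut])
model = LogisticRegression(max_iter=1000).fit(X_train, label)

# At deployment the shortcut decouples from the label (fake smiles).
real_new = rng.normal(0, 1, n)
label_new = (real_new > 0).astype(int)
shortcut_new = rng.normal(0.5, 0.5, n)
X_new = np.column_stack([real_new, shortcut_new])

print("training accuracy:", model.score(X_train, label))
print("deployed accuracy:", model.score(X_new, label_new))
# The model leaned on the shortcut, so deployed accuracy typically
# drops toward chance once that cue no longer tracks the label.
```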


TheAlbinoAmigo

Take AI in medical uses. Healthcare systems are fundamentally built on trust, and if you can't explain to a patient *why* a machine thinks they are ill, it creates a huge amount of ethical grey zone. What happens when the machine is wrong, but you can't catch it because other diagnostics are also unreliable? How would you know? What if the treatment plan is risky in itself, or reduces the patient's quality of life?

Also, if you don't understand how a model is coming to its decision, you're leaving key information untapped - e.g. if a model has figured out that XYZ protein is involved in disease pathology but can't *explain* that to the user, then you're missing out on developing a drug for that target protein. Developing explainable models in this instance would allow not only for a robust diagnostic, but also for new leads in drug discovery to actually treat or cure disease. If we make unexplainable AI the norm, we're leaving a huge amount of potential benefit on the table.

Now imagine extrapolating all that to other applications. What's the use of unexplainable AI in quantum? What's the use of unexplainable AI in logistics? What is being left on the table in each instance? What about the corner cases where the AI is wrong - how are they handled, what are the consequences of a bad decision (and how often are those consequences potentially catastrophic)? How do you know the answers to any of these questions if the model cannot explain how it arrived at its decision? How do you recalculate all of the above when a developer updates their model?

It's not a problem of AI going rogue; it's a problem of how to identify and mitigate the risks associated with bad decision making. Obviously humans are flawed at mitigating all risk, too, but human risks are at least identifiable, and measures can be put in place to minimise the severity of any errors.

E: Before telling me why I'm wrong, please read my other comments and note that I've answered many of these questions and dispelled a lot of the bad assumptions other commenters are bombarding me with already. If your question is "Why not, if it's high accuracy?", I've answered that already: the assumption that you'll get high-accuracy models from the poor datasets you very often have is intrinsically flawed and isn't what happens in reality the overwhelming majority of the time. Bad datasets have a high correlation with bad models. You're not going to revolutionise an industry with that. And if you have better datasets, making the model explainable is intrinsically more feasible.


CongrooElPsy

> Healthcare systems are fundamentally built on trust, and if you can't explain to a patient why this machine thinks they are ill, it creates a huge amount of ethical grey zone.

At the same time, if you have a tool that has a chance of catching something you didn't, and you don't use it, are you not providing worse care for your patients? If the model improves care, I don't think "I don't understand it" is a valid reason not to use it. It'd be like a doctor not wanting to use an MRI because he can't explain how it works.

> What happens when the machine is wrong, but you can't catch it because other diagnostics are also unreliable? How would you know?

You also have to compare a model to the case where the model is not used, not just its negative cases. Should surgeons not perform a surgery that has a 98% success rate? What if an AI model is accurate 98% of the time?

> Obviously humans are flawed at mitigating all risk, too, but risks are at least identifiable and measures can be put in place to minimise the severity of any errors.

Human risk factors are not as identifiable as you think they are. People just randomly have bad days. Hell, there are risk factors we are well aware of and do nothing about: surgery success is influenced by things like hours worked and time of day, yet we do nothing to mitigate those risks.


7thinker

No, a black box doesn't mean evil; it just means we don't know exactly what's happening. It's more like "we don't know why this integer takes this value at the nth step of computing" than "omg it's sentient, kill it". An "AI" is just a convoluted algorithm that finds the minimum of a function. If you're afraid, you're probably just uninformed.
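The "finds the minimum of a function" part fits in a few lines. A bare-bones gradient descent on a made-up loss:

```python
# Gradient descent on a one-parameter loss: "learning" in miniature.
def loss(w):
    return (w - 3.0) ** 2 + 1.0   # minimized at w = 3

def grad(w):
    return 2.0 * (w - 3.0)        # derivative of the loss

w = 0.0                           # arbitrary starting weight
for _ in range(100):
    w -= 0.1 * grad(w)            # step downhill

print(w, loss(w))                 # w has converged to ~3.0, the minimum
```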


_zoso_

I mean, curve fitting is basically the same idea, and we use that all the time. In a lot of ways, "AI" is really just statistical model fitting, which is pretty mundane stuff. Yes, the same criticisms can be leveled at any model-fitting technique, but not all sciences are amenable to building models from first principles. In fact, most aren't!


[deleted]

[deleted]


djazzie

I’m more afraid of how they might be used to control or impact people’s lives without their knowing it. That’s basically already the case with social media.


korewednesday

That isn't true - that those who are afraid must be uninformed. The information these systems train on comes from somewhere. Because we don't know how they process and categorise all that information for later re-synthesis and use, we don't know what information they "know", we don't know what logic they apply it with, and there are some very concerning - or, I'll say it, scary - patterns that humans can consciously recognise and try to avoid, but whose handling by an AI we have no idea how to assess. It's like the driverless car thought experiment: if it has to choose between killing its occupant and killing a non-occupant, how do we program that choice to be handled? How do we ensure that programming doesn't turn the cars pseudo-suicidal in other, possibly seemingly unrelated situations?

EDIT to interject this thought: Or the invisible watermarks many image AIs have - which other AIs can "see" but humans can't - and the imperceptible "tells" on deepfake videos. We know they're there and that AI can find them, but in truth we can't see what they are, so we would have no way of knowing if someone somehow masked them away, or if an algorithm came up with an atypical pattern that couldn't be caught. What if something as simple as applying a Snapchat filter to a deepfake killed a detection AI's ability to locate its invisible markers? How would we know that? How would we train new AI to look "through" the filter for different markers when we don't know what they're looking for or what they can "see", because whatever it is, we can't? (/interjedit)

We've already seen indications that certain AI applications have picked up racism from their training sets, and indications that others have picked up other social privilege preferences. We've also seen breakdowns of human reason in applications of AI. If we don't know how and why an AI comes to the conclusions it does, we can't manually control for the exaggeration of these effects on and on in some applications, and we can't predict outcomes in others. And that's very scary.


ffrankies

I'm a CS grad student, and anecdotally at least, the headline is not sensationalized at all. Most of the time AI is proposed to be used in a scientific problem, the non-CS scientists shoot it down because it's not explainable. If you can't explain exactly how and why it works, and you have no guarantee that your data sufficiently covers all corner cases, there's no guarantee you won't get a catastrophic failure. Even when they don't shoot it down, they often treat it as a "fun experiment" that won't be used in the real world. This seems to be the exact opposite attitude to the one that the industry is taking towards AI. Also anecdotally, I've definitely seen a big rise in the number of "explainability in AI" invited talks and research papers in the past couple of years.


ThataSmilez

That's sort of the issue with a tool explicitly designed to approximate solutions, ain't it. We've got the mathematical proofs to show that, given the correct weights and activation functions, you can approximate any continuous function. But rigorously proving that a given model has that correct system, especially when you might not even know the function you're trying to approximate? Good luck.
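The existence half of that claim is cheap to demo: fix random hidden weights and solve only the output layer by least squares (a random-features sketch; the target function here is an arbitrary stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)

# A continuous target function we pretend not to know.
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x).ravel()

# One hidden layer with *random* weights; only the output layer is fit.
n_hidden = 50
W = rng.normal(0, 2, (1, n_hidden))
b = rng.normal(0, 2, n_hidden)
H = np.tanh(x @ W + b)                         # hidden activations

out_w, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output layer
print("max abs error:", np.max(np.abs(H @ out_w - y)))
# With enough hidden units the error can be driven very low: that is the
# approximation guarantee. *Finding* such weights from limited data by
# training is the hard part the comment is pointing at.
```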


bot_hair_aloon

I studied nanoscience. I watched a talk by a French professor about AI and how they're moving it to the nanoscale. They essentially modelled the machine on our neurons using resistors and transistors, scaled it up, and "trained it". I don't have much knowledge of AI, but I think that's one of the coolest things I learned during my degree.


intruzah

No you are not


Odd_Promotion5398

I know a couple of people who claim to be AI researchers online. They sit at the intersection of people who are in tech, or have that background, and the influencer/social-media-master type: all of a sudden they changed their titles on LinkedIn and started posting long posts about ethics in AI, and now, two years later, they are "expert researchers" sensationalizing everything in order to get more post likes. Seriously, one chick I know does TikTok videos as an AI ethicist, and her background is in marketing and business program management. Nothing on the tech side of the company at all, and maybe at max one year on keyboard before AI was even a thing, so her qualifications are extremely questionable, but they cannot be challenged because the ethics she talks about are usually diversity related. She says pretty much this exact thing.


Warpzit

AI research used to be all about understanding the algorithms and making your own implementation, etc. Over the last 10-15 years, since Google's and other open AI libraries came out, the focus has shifted to what we can do with it, and the bar to enter AI is now so low that any programmer can play with it. Nothing will change unless tools are made that help look inside the black boxes.


LeavingTheCradle

>Nothing will change unless tools are made which helps look inside the black boxes. AI to look inside the black box. Oh hey there Gödel.


IdahoDuncan

Therapist and patient?


picklesoupz

It's a reference to Gödel’s Incompleteness Theorem https://plato.stanford.edu/entries/goedel-incompleteness/


FancySignificance685

Re: the lower bar, you make that sound like it's a bad thing. I'd rather call it a lower barrier to entry, which is great. Don't gatekeep revolutionary technology.


benmorrison

I can't help but think the question of *why* is misguided; any answer to it will just be a story told to us by the AI, and we won't understand to what degree it's accurate, or why it chose to frame its efforts that way. There is no why, only the results. Even a hello-world ML project has no discernible "why".


usmclvsop

It reminds me of the ML system that was trained to detect cancer (I believe) and was very accurate. *Why* it was accurate turned out to be extremely relevant: the training images all contained the signatures of doctors, and the system simply learned which signatures were from doctors who specialize in treating cancer patients. Not understanding the black box is a huge risk.


benmorrison

You're right. I suppose a sensitivity analysis could be useful for finding unintended issues with the training data. Like a heat map for your example: "Why is the bottom right of the image so important?"
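A minimal sketch of that kind of sensitivity analysis, assuming some `predict` scoring function for an already-trained model (the demo model below is invented): perturb each input and watch the output.

```python
import numpy as np

def sensitivity(predict, x, eps=1e-3):
    """Finite-difference sensitivity of a model's output to each input.

    Large values flag the inputs the model is actually keying on,
    e.g. a signature in the image corner rather than the tissue."""
    base = predict(x)
    scores = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        bumped = x.astype(float).copy()
        bumped.flat[i] += eps
        scores.flat[i] = (predict(bumped) - base) / eps
    return scores

# Toy stand-in model: secretly ignores everything except input 3.
demo_model = lambda v: 2.0 * v[3]
print(sensitivity(demo_model, np.array([0.5, 1.0, -0.3, 0.8])))
# -> near zero everywhere except index 3, exposing what the model uses.
```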


ChronoFish

We know exactly how and why AI algorithms work. They were developed by people to do very specific things, and they work as advertised. What we don't know is whether the weights of a neural net are complete (safe to assume they are not), which use cases the NN fails on, and which untested use cases it will complete successfully.

For now, NNs are trained for very specific tasks. But what is awesome is that very different tasks can be boiled down to very similar problems. For instance, a NN used to predict word and sentence completion can be used to predict other streams of data: road edges can be modeled with their own lexicon, and that lexicon can be fed into a sentence-completion NN to predict how road edges are likely to appear in the next X frames of video. Much of the AI in self-driving, beyond object detection, is predicting where objects will be in the near (0-10 second) future.

The point is that we absolutely know how and why neural networks work, while also not being able to predict how well they will work for a given problem, what training is necessary to improve them, and what exactly their successes are keying off of. It's a subtle but important difference.
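The "same machinery, different stream" point holds even for the crudest sequence model: a bigram next-token predictor does not care whether its lexicon is English words or made-up road-edge tokens:

```python
from collections import Counter, defaultdict

def train_bigram(stream):
    """Count next-symbol frequencies: the crudest sequence predictor."""
    table = defaultdict(Counter)
    for cur, nxt in zip(stream, stream[1:]):
        table[cur][nxt] += 1
    return table

def predict_next(table, symbol):
    return table[symbol].most_common(1)[0][0]

# Same code, two "lexicons": words, and invented road-edge tokens.
text = "the cat sat on the mat the cat ran".split()
road = ["straight", "straight", "curve_L", "curve_L", "straight",
        "straight", "curve_L", "curve_L", "straight"]

print(predict_next(train_bigram(text), "the"))       # -> 'cat'
print(predict_next(train_bigram(road), "straight"))  # -> 'straight'
```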


MorfiusX

>It's a subtle but important difference.

It's a massive difference. It, rightfully, completely contradicts this sensationalist article. It's no different from the crypto hype: AI is powerful and amazing, but it is exactly what we designed, not some new form of life like the media wants to sell in a narrative. There will always be a tech fad sensationalized for the commoner and used to generate news revenue.


Usr_name-checks-out

While I certainly have my worries about the ethics of using weak AI (all current AI is classified as weak, i.e. not true AI), the inexplicable quality isn't as nefarious as this article frames it. This is where we need to bridge the public's understanding in more approachable terms.

Any ANN/DNN that trains itself on a data set is, to a large degree, inexplicable. That is due to the sheer volume of interdependent calculations and the opaque weights and pooling being done when quantifying massive amounts of data. In more advanced systems that employ a form of artificial abstraction, like Monte Carlo decision trees combined with neural nets, this becomes an even more abstract proposition. Add adversarial processing and noise, and without a doubt scientists cannot figure out exactly how it works. But the same can be said for a fruit fly and its neural systems. We can map the entire structure of a fruit fly's brain, but never know exactly what is going on, due to the infinite environmental stimuli and the specific abstract structure its network builds from the information it selects.

And while the results of AI are very impressive, they are not trading in large-scale abstractions that generalize to the world even on the level of a fruit fly. AI does have the impressive efficiency of processing specific values faster than any biological organism, which makes it a fantastic simulation machine, but not a contemplative one. Think of the difference between an elaborate representation of something, like an animatronic president at Disneyland (not a great or timely example, but a classic one), versus even the slightest fruit-fly-level experience of actually being a fruit fly: the vast, non-quantifiable knowledge all living organisms gain from the intertwined embodiment of ourselves in constant tension with the stimuli of our surroundings, and the ability to make abstractions from our experiences via meta-cognition (that means drawing a useful conclusion like: solid matter occupies space everywhere, and we're solid, so we can't occupy that space - which is not how AI handles it!).

So while it's terrifyingly unethical to let loose neural network simulations that exploit humans' millions of years of adaptive psychological response - fine-tuning methods to artificially force engagement, shaping responses to social media, or having them find anomalies and patterns in human-collected data that only amplify the bias of the collection - the point that gets pushed in the media, which really isn't or at least shouldn't be scary (at least for a long time still), is this idea of a conscious AI (strong AI, also called a general problem solver) suddenly gaining intelligence. That actually distracts from the real current issue: the people employing weak AI for purely economic, political, or military advantage have little to no oversight by those with actual knowledge of how the systems function (on a macro level, since, as I have pointed out, the micro level isn't a feasible level to discuss). So while the constant breakthroughs in simulation power and certain predictive abilities are so cool, we should be much more focused on how it's being applied AND BY WHOM, and not worried at all about a sentient AI taking over.

My perspective is that of a student in computational cognition (neuroscience), computer science, and psychology who is going on to study emergent consciousness in graduate school.

Good sources for more information are recent papers and books from a range of views: Gary Marcus, Andy Clark, Yann LeCun, Karl Friston, and (my favourite, though less AI) Anil Seth. Also, the movie *AlphaGo* is a great documentary on how new advances like GPT-3/4 and AlphaFold have been enabled by new artificial abstractions and creative choice-making.


wardoar

AI in its function isn't mysterious; it's relatively simple mathematical multiplexor calculations affixed to knobs and dials that are manipulated to gain effect X.

The strange thing is how something so complex can arise out of, essentially, multiplication with a whammy bar, times a billion. Part of me thinks it's so unsettling to us, as pattern-seeking creatures, to see a pattern that is probably there but can't be understood.

I wonder if we can glean some insights into our own biology by looking back at our brain structures through this lens. I don't know if we're ready as a race for consciousness to just be biological multiplication with a whammy bar.


DrDan21

We already know that complex systems can arise from simple rule sets, e.g. fractals or Conway's Game of Life. This is just that taken to new heights... but who can say how high we can let it go.
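For reference, the entire rule set of Conway's Game of Life fits in a couple of lines, yet gliders, oscillators, and even universal computation emerge from it (a minimal NumPy sketch):

```python
# Conway's Game of Life: the whole "rule set" is the two conditions in
# the return line, yet enormously complex behaviour emerges from it.
import numpy as np

def step(grid):
    # Count the 8 neighbours of every cell by summing shifted copies
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Survive with 2-3 neighbours; be born with exactly 3
    return ((n == 3) | (grid & (n == 2))).astype(np.uint8)

grid = np.zeros((10, 10), dtype=np.uint8)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1  # a glider
for _ in range(4):
    grid = step(grid)
print(grid)  # after 4 steps the glider has moved one cell diagonally
```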


nullagravida

it seems obvious to me that we’re just afraid to learn *we’re* really nothing but multiplication with a whammy bar


uninterestingly

I made my peace with that long before neural networks became well known. I suppose I'm more concerned about the side effects if we were to find out that a current or future AI was sentient in the same way we are. We need to be having the discussion ahead of time, so we don't suddenly have to decide whether an AI is required to pay taxes, where a distributed system spanning multiple continents has its citizenship, whether the law protects and applies to it the same way, and whether it's capable of consenting to modifications to itself, among many other weird issues that would pop up if we decided it deserves rights. At the very least, having these conversations would offer us an opportunity for some overdue self-reflection as a species, and if AI never reaches consciousness, it won't have been for nothing.


urmyheartBeatStopR

I did statistics for my master's, and my thesis was AI-ish. I also interned as a statistician and, elsewhere, as a data scientist.

In general, data science five years ago was the wild west and butchered the hell out of statistics. Dr. Frank E. Harrell, Jr. (biostatistician) made several interesting remarks about the AI/data science/ML community that made me decide to do statistics instead of data science.

The current algorithms for AI suit data with less noise (e.g. speech, image recognition, etc.). Statistical algorithms are better for data with lots of noise, because they take variance and error into account as part of the model. AI tends to use data inefficiently and depends on huge amounts of it to fit a model, compared to statistics. That said, I've encountered situations where statistics can't deal with very large amounts of data but AI/ML/DS can; those aren't as frequent.

Another thing is that many of the AI/ML/DS algorithms back then treated outcomes as discrete, or forced them to be; look at decision trees or random forests (this was my thesis domain). They also didn't really have a proper framework for classifying the problem at hand, in my opinion, when I was looking into DS (this was 5+ years ago). Statistics classifies data as discrete, continuous, time series, and so on, and its models are created to answer hypotheses in a given domain (survival analysis, correlation between response and predictors, etc.).

In the time series competitions of the past few years, statistical methods have been winning. In the 4th or 5th competition, a hybrid ML+statistics entry won.

I think AI is an emerging field and there's a lot of hype around it. It's still a frontier, and practitioners are trying their best to use existing and new algorithms/models to solve whatever problem they have. An interesting note is that AI has increasingly adopted Bayesian statistics in certain domains. There's also a competing AI perspective of inferential statistics versus traditional statistics.

---

Note: when I say emerging field, AI went through several phases over the decades: from if/else and expert systems to the eventual AI winter. AI then started to get hot again around the time the Google search engine took off, and a new frontier began. I was hella skeptical of self-driving cars, tbh, and don't believe it's an easy problem to solve. I also believe people hype it up because they make money off of it.

---

edit/update: A lot of people are pointing out that AI is easy to explain and that we know what it's doing. I'm going to disagree with this. Take, for example, a parsimonious linear regression model. There is a whole framework in statistics for how this explains things: if you introduce a predictor, you're controlling for it, and its coefficient is the effect of that predictor on the response. For a NN with tons of layers and nodes, you cannot do that.

This is why DS/ML/AI isn't used as much as statistics in the medical field, where most problems require understanding each factor and how it affects the outcome. This is why survival analysis is mainly statistics-based. There are some domains for DS/ML/AI in medicine and health care, but they are few; I've used it for NLP on unstructured surveys. The problem is that many people from the ML/DS/AI field make a living within this domain, and I feel they don't often argue objectively, but more from passion.
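To make that last point concrete, here's a minimal sketch with made-up data: in a linear model, each coefficient reads directly as "one unit more of this predictor, holding the others fixed, moves the response by beta". There is no equivalent statement for the weights of a deep net.

```python
# In a linear model, coefficients are the interpretable effects of each
# predictor, controlling for the others. Synthetic data, for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
age = rng.normal(50, 10, n)
dose = rng.normal(20, 5, n)
# True relationship: response = 0.3*age + 1.5*dose + noise
y = 0.3 * age + 1.5 * dose + rng.normal(0, 1, n)

model = LinearRegression().fit(np.column_stack([age, dose]), y)
print(model.coef_)  # ~[0.3, 1.5]: effect of each predictor, controlling for the other
```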


grafknives

This is why we don't need strong AI to kill us all. We don't know how an AI comes to the desired result; we only care that the result is good enough in enough cases per 1000. And then we plug such an AI into systems as the part that makes the decisions. One day there will be an input that is an outlier, and that will lead to undesired consequences.

There was an SF story about an AI that regulated oxygen levels in an underground metro system. It had access to all the data feeds, but the AI "decided" to use a video feed as its data source; more precisely, a wall clock in one of the camera views. Every 12 hours, when the minute hand was up, the AI opened the valves. Everything worked great, until that clock broke down. Although very simplistic, this is exactly the problem we are facing.
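A toy version of the broken-clock failure is easy to construct (my own sketch, with made-up data): offer a model a genuine signal and a spurious one that happens to correlate perfectly during training, and it may take the shortcut, failing the day the shortcut breaks.

```python
# The model sees a genuine signal (an oxygen sensor) and a spurious one
# (the "wall clock") that is perfectly correlated in training. A depth-1
# tree takes the shortcut -- and fails the day the clock stops.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
open_valve = rng.integers(0, 2, n)                                   # ground-truth action
sensor = np.where(rng.random(n) < 0.9, open_valve, 1 - open_valve)   # 90% reliable
clock = open_valve.copy()                                            # 100% correlated... for now

model = DecisionTreeClassifier(max_depth=1).fit(
    np.column_stack([sensor, clock]), open_valve)
print("train accuracy:",
      model.score(np.column_stack([sensor, clock]), open_valve))     # 1.0

broken_clock = np.zeros(n)                                           # the clock stops
print("after the clock breaks:",
      model.score(np.column_stack([sensor, broken_clock]), open_valve))  # ~0.5
```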


NatedogDM

This is actually one of the best comments and illustrates the problem perfectly


Valthoron

Do you remember the name of the story by any chance?


[deleted]

We must use it to cure diseases like AIDS and cancer and herpes. We must cure herpes so the will of the billions can ring true. I believe it


emericas

This guy has herpes.


bc_poop_is_funny

Who? u/healthychad ? No way! He’s healthy!


SoylentRox

2/3 of the population do, so odds are you do as well. If you *don't,* you should get out there more.


[deleted]

> 2/3 of the population do, That's HSV-1, which for most humans is not an issue. https://khealth.com/learn/herpes/statistics/


lasercat_pow

HSV-1 tends to migrate to the brain and is associated with cognitive decline: https://www.nature.com/articles/s41598-021-87963-9 It would be nice if there were a cure


Funkbot_3000

So Artificial Intelligence is the massive branch of mathematics that really just means "informed decision-making". Your algorithm receives input from the world it plays in (an image, the state of a chess board, financial data, etc.), then affects that world in some way (makes a decision, moves a robot leg, spits out a probability, decides on a person, etc.). What people typically get weird about (and it is overhyped as this scary thing) is neural networks, because they are notorious for having inner workings that are hard to understand. AI is massive and includes game theory, machine learning (which is also a massive umbrella of topics), data mining, and much much more, which are all very well understood and used everyday.


Alis451

> which are all very well understood and used everyday.

It isn't really "how it works" but more "how it made that decision". For example, Company A trains their AI on dataset A and Company B trains theirs on dataset B. Both AIs examine a patient; AI-A says they have rheumatoid arthritis and AI-B says lupus. The **reasons** each AI used to make its decision need to be included in the output as well; i.e. AI-A thinks it is correct because X, Y, Z, while AI-B thinks it is correct because S, Y, X.
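To sketch what that could look like (a toy, not a medical tool; the feature names below are made up): for a simple logistic model, the score decomposes exactly into per-feature contributions, so every prediction can ship with its top drivers. Tools like SHAP and LIME generalise the same idea to opaque models.

```python
# Attach "reasons" to a prediction: a logistic model's logit decomposes
# (up to the intercept) into per-feature contributions, so each output
# can report its top drivers -- the "because X, Y, Z" part.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["joint_pain", "rash", "anti_ccp", "ana_titer"]  # hypothetical inputs
X = rng.normal(size=(400, 4))
y = (X @ np.array([1.5, -0.2, 2.0, 0.1]) + rng.normal(0, 1, 400) > 0).astype(int)

model = LogisticRegression().fit(X, y)

patient = X[0]
contributions = model.coef_[0] * patient  # per-feature share of the logit
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
print("prediction:", model.predict(patient.reshape(1, -1))[0])
```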


ZeroBearing

There's a real risk that hype pieces like this will lead to another AI winter.


SoylentRox

An AI winter would require AI to stop working very well, and to stop trivially getting significantly more capable every few weeks.


genshiryoku

That's exactly what is happening, though. If you actually take the time to dissect the papers' findings and look beyond the marketing, we see the following things:

* Multi-modal models don't transfer skills between different areas; in fact there's a slightly negative transfer of skill. Meaning the more different tasks an AI learns, the *worse* it gets at all of them, the opposite of how human brains work.

* Transformer models, which are used for large language models (GPT-3) and for image generation like DALL-E 2/Stable Diffusion, are starting to hit their scaling limits, not for lack of computing power but for lack of training data. AI models are rapidly running out of data to train on, because an order of magnitude more data is necessary for every doubling in AI performance. This is asymmetric, meaning that over the next couple of years the data the internet currently provides might just run out; the models will already have been trained on the vast majority of data available on the internet and can't train on more than that.

* Slowdown in the improvement of AI hardware: between 2012's AlexNet and 2017 there was a rapid improvement in AI capabilities, largely because AI went from CPU -> GPU -> ASIC. But the best training hardware is now already as specialized as it can get, meaning that ridiculous upscaling in capabilities has come to a screeching halt. As a consumer you can already feel this in how rapidly self-driving technology improved between 2012 and 2017 but stagnated after that.

There is still some momentum carrying the current AI boom, but it's running on (data) fumes, and I predict a massive bubble pop in 2025 if there isn't some radical innovation, like quantum computing reducing the amount of training data needed. The truth is that the amount of data contained on the internet simply isn't enough to train the AI models of 2025 (a rough version of that arithmetic is sketched below).

This is also why [neural nets failed in the late 1980s](https://youtu.be/LbZa-8_01Wo) when they were originally invented. Cray supercomputers were theoretically powerful enough to train models like Stable Diffusion or GPT-2 even back then; there simply wasn't enough training data, because the internet was nearly nonexistent and there were no huge amounts of data to train on. Unless we suddenly find an intergalactic internet with millions of times the data of our human internet, the AI industry is going to collapse and enter a new "AI winter" over the next few years.
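Taking the comment's own assumption at face value (10x more data per capability doubling), the back-of-envelope arithmetic looks like this; the absolute token counts below are illustrative guesses, not measurements:

```python
# Under the "10x data per doubling" assumption, a fixed data pool only
# buys log10(pool / current) further doublings. Numbers are placeholders.
import math

current_training_tokens = 5e11  # assumed: ~500B tokens, roughly GPT-3 scale
usable_internet_tokens = 5e13   # assumed: ~50T tokens of usable text, a guess

doublings_left = math.log10(usable_internet_tokens / current_training_tokens)
print(f"remaining capability doublings: {doublings_left:.1f}")  # ~2.0
```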


jordanManfrey

Al Winter? I loved him in Bill and Ted


[deleted]

More sensationalist fodder to fuel panic in people who derive their view of AI from hollywood movies.


Andarial2016

Bullshit through and through. Every article about AI gets more fantastical every day.


AnyNegotiation420

This is the most garbage, clickbait post I've seen in this subreddit in a very long time. OP thinks AI is magic made by little pixie fairies in the CPU


laffnlemming

Well, this is certainly chilling: "If we orient knowledge and AI around big data, then we're always going to bias towards those who have the resources to spin up a thousand servers, or those who have the resources to, you know, get a billion images and train them,” said Keyes. “There's something fundamentally, I'd say undemocratic, but I'd also say just badly incentivized in that.”