I mean, the person who said this is the CTO of OpenAI and an engineer working on this project. I would imagine she could be considered an expert.
This is clearly AI-generated; the boot laces look weird.
I take it that this was in the social sciences, because based on what I have seen so far I don’t think it can even outperform a college kid in maths.
It still amazes me that Google thinks unchecked random information from somewhere in the sea of the internet can be a reliable source. Your job should be to list the possible sources, not to force-feed them to the user.
100% me when I first started using GitHub: “welp, it’s saying something I don’t understand, time to nuke the local copy and restart”
Correction: I buy your products, therefore my computer has a virus.
It can create faces that have never existed, but can you guarantee that the child in the CP it has created does not look identical to a child that already exists? After all, it can very well produce something using children directly from, or very similar to, its training set.
Yea, well, there is no way to guarantee that AI won’t spew out CP where the child looks exactly like a child it has seen in its training set, i.e. a child that really exists. So, no go.
Better yet, upload gigabytes of senseless text and photos and let Microsoft train their AI on that.
Proceeds to justify the cost of unpaid, peer-reviewed digital publishing using pie charts and bar plots.
I guess the end result would be the same. But at large, the economic system and human nature would be to blame, which is actually what I am trying to blame here too: not AI, but the people in power who abuse AI and steer it towards hype and profit.
Nope, doesn’t help; imma get my flamethrower nevertheless.
“By involuntarily uploading your data to OneDrive, you also agree to it being used in training AI models.”
“oooo books he must be really smart”
For instance, I would be completely fine with this if they said “We will train it on a very large database of articles, and finding relevant scientific information will be easier than before.” But no, they have to hype it up with nonsense expectations so they can generate short-term profits for their fucking shareholders. This will come at the cost of either the next AI winter or a senseless allocation of major resources to a model of AI that is not sustainable in the long run.
I am not denying the positive use cases being employed now, and possibly in the future. I am not opposing the use or development of AI tools now or in the future either.
However, the huge negative possibilities are very real too and are/will be affecting humanity. I am against the course big AI companies seem to be taking, and against the possible future allocation of most major tech innovation to their cause.
It is of course very hard to predict how the positives and negatives will balance out, but I think big tech companies don’t have any interest in balancing this out. They seem to be very short-sighted about anything other than direct profits. I think they will take the easiest way to more profit/AI dominance, which is a short-term investment. So I am not very optimistic about how it will pan out. Maybe I am wrong and, like computers, it will open up a whole new world of possibilities. But the landscape then and the landscape now are also quite different in terms of how big tech companies and the richest people act.
The Onion? No? Great…
Next up: “Researchers wondered what would happen if they hooked AI up to a machine that can consume humans for energy.”