Artificial Intelligence

This thread is for technical or philosophical discussion about what I think is one of the more important topics of our lifetime. That's saying a lot coming from me, because I know this field is riddled with people who overhype it or even fearmonger about it. It is second only to cryptocurrency as the most annoying topic in software.

Probably the laziest way to define Artificial Intelligence is technology that can imitate the human mind. This is important because it means even a simple program like Akinator (which guesses what you are thinking) can be artificial intelligence, despite just being a decision tree. A program doesn't necessarily have to be dynamic or even very intelligent to be artificial intelligence. For the purposes of this discussion though, try to refrain from this "AI is basically just statistics" stuff unless you are willing to provide a meaningful argument.

It's important to note this thread is not just about machine learning or robotics. Those are sub-categories of artificial intelligence. You can really use this to bring up any interesting philosophical observation about AI, maybe some area you plan on studying, what you're working on, etc.

Here are some starting topics:

1. 3D Machine Learning

Facebook released an extension of their PyTorch library that attempts to provide some useful functions for learning from 3D space or translating into 3D space. It doesn't work very well from what I've seen, but if you could create 3D environments from 2D images, that would be a game changer, particularly for the 3D modelling industry.
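For anyone wondering what "learning from 3D space" even looks like in code, here's a rough sketch in plain PyTorch (not the actual PyTorch3D API) of a symmetric Chamfer distance, the kind of loss you'd minimize when fitting a predicted point cloud to a target shape. The function name, shapes, and sizes are just illustrative:

```python
# Minimal sketch (plain PyTorch, not the PyTorch3D API): a symmetric Chamfer
# distance between two point clouds, the kind of loss used when fitting a
# predicted 3D shape to a target shape.
import torch

def chamfer_distance(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred: (N, 3) predicted points, target: (M, 3) target points."""
    # Pairwise squared distances, shape (N, M)
    diff = pred.unsqueeze(1) - target.unsqueeze(0)
    dist = (diff ** 2).sum(dim=-1)
    # For each predicted point, distance to its nearest target point, and vice versa
    pred_to_target = dist.min(dim=1).values.mean()
    target_to_pred = dist.min(dim=0).values.mean()
    return pred_to_target + target_to_pred

if __name__ == "__main__":
    pred = torch.rand(1024, 3, requires_grad=True)  # e.g. points decoded from a 2D image
    target = torch.rand(2048, 3)                    # e.g. points sampled from a ground-truth mesh
    loss = chamfer_distance(pred, target)
    loss.backward()                                 # gradients flow back to the predicted points
    print(loss.item())
```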

2. Facial Recognition

May as well put this out there since it's like the textbook example of scary AI shit. I think it's fine to bring up political implications but don't use that as a chance to derail.

3. Neuralink

Elon Musk's stupid ass company that wants to put a microchip in your brain. Literally. It's like people's biggest fear about Bill Gates putting a chip in your brain except this time it's actually happening and everyone is cool with it. I claim this has some serious philosophical implications about AI.
 

Plague von Karma

Banned deucer.
3. Neuralink

Elon Musk's stupid ass company that wants to put a microchip in your brain. Literally. It's like people's biggest fear about Bill Gates putting a chip in your brain except this time it's actually happening and everyone is cool with it. I claim this has some serious philosophical implications about AI.
The part that's interested me most here is the brain microchip. The prospective benefits are clear: mental illness could be, near enough, eradicated. Paralysis, deafness, mutism, blindness, and other problems could also theoretically be fixed. However, this is only on paper... and any amount of scrutiny shows that there are numerous flaws in the idea.

Neuralink's microchip idea is a flight of fancy in its current state. Neuroprosthetic technology alone isn't that advanced, and I doubt it'll be even remotely plausibly usable for another decade. Not to mention the ethical implications (isn't it just masking a greater issue?) and all manner of healthcare laws and regulations it runs into. Now, I may be ignorant of the greater idea, but the idea is to have a FitBit sort of thing in your skull..? And to kind of connect your brain to the internet? From a cybersecurity perspective, this just sounds like a recipe for disaster. Hell, how would it even work? Are there voices in your head? Intrusive thoughts? I'd definitely steer clear of it; it's just not shaping up to be a sustainable idea.
 
AI advancements do have to be handled with care, but I think a lot of sources fearmonger or exaggerate the chances of an "AI takeover" or other implications. Ironically, Elon Musk is one of those fearmongers, in my opinion: he has previously stated that he believes AI could very well overtake us by 2025 based on his view of how current advancement is going, and he is very afraid of DeepMind. Despite this belief, he is spearheading the development of self-driving cars and the Neuralink chip. It is worth noting, however, that the more technology advances, the higher the chance of a technological singularity, which would completely change human civilization.
 
For the purposes of this discussion though, try to refrain from this "AI is basically just statistics" stuff unless you are willing to provide a meaningful argument.
I'm currently taking a few AI classes in college, and treating it as just statistics is the only way I understand how it works. The only way I can visualize something like machine learning working is by thinking of it as the program getting a bunch of data and eventually learning how to do a task based on the overall structure of the data provided (IIRC machine learning also involves statistical concepts like linear regression to make better judgements). That's also the only way I can imagine implementing it. I only did one class on AI last semester and am doing another this semester, so maybe my perception and understanding of it will change from this framework.
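To make that "just statistics" framing concrete, here's a toy sketch (mine, not from any class) of ordinary least-squares linear regression fit in closed form with NumPy; the data and coefficients are made up purely for illustration:

```python
# A toy illustration of the "it's just statistics" view: ordinary least-squares
# linear regression, fit in closed form.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))                # one input feature
y = 3.0 * X[:, 0] + 0.5 + rng.normal(0, 0.1, 200)    # true slope 3, intercept 0.5, plus noise

X_design = np.column_stack([np.ones(len(X)), X])     # add a bias column
# Solve the least-squares problem, equivalent to the normal equations (X^T X) w = X^T y;
# the "learning" is just solving a linear system.
w = np.linalg.lstsq(X_design, y, rcond=None)[0]
print("intercept, slope:", w)                        # should be close to (0.5, 3.0)
```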

Tangent aside, I created my first "advanced" AI for checkers last semester using Monte Carlo Tree Search. Basically, I had to make the program build a tree of the possible states the board could be in and check whether each was a "winning" state (which in my case was either a draw or a victory for my AI). Before I implemented any heuristics, I thought the way it behaved was pretty interesting. There were a lot of points where my AI purposely sent pieces to die. However, by the end of the game, that didn't really matter, since the AI would force an infinite, repeating sequence of moves that would lead to a draw. Creating the heuristics for the AI was also pretty interesting. I had to do a lot of trial and error in adjusting how valuable the pieces were, which was surprising given that I expected it to not have much of an effect. Also, the project showed me how much faster C++ was than Python (IIRC my friends were able to do 3x as many simulations in a 10-second time frame).
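For context, here's a generic Monte Carlo Tree Search skeleton (not my actual checkers program), run on a toy Nim game so the select / expand / simulate / backpropagate loop is visible. All the class and function names here are just illustrative:

```python
# A generic Monte Carlo Tree Search skeleton on a toy Nim game
# (players alternately remove 1-3 stones; taking the last stone wins).
import math
import random

class NimState:
    def __init__(self, stones=15, player=1):
        self.stones, self.player = stones, player
    def moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]
    def play(self, n):
        return NimState(self.stones - n, -self.player)
    def winner(self):
        # The player who just moved took the last stone and wins.
        return -self.player if self.stones == 0 else None

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.untried = [], state.moves()
        self.visits, self.wins = 0, 0.0
    def ucb1(self, c=1.4):
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_state, iterations=5000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend by UCB1 until a node with untried moves or a leaf
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one random untried move as a child
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.state.play(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout to the end of the game
        state = node.state
        while state.winner() is None:
            state = state.play(random.choice(state.moves()))
        winner = state.winner()
        # 4. Backpropagation: each node credits wins to the player who moved into it
        while node is not None:
            node.visits += 1
            if winner == -node.state.player:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

if __name__ == "__main__":
    print("MCTS takes", mcts(NimState(stones=15)), "stone(s) from a pile of 15")
```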

Working on the AI was pretty interesting, but I'm not sure how much I want to focus on building AIs like this in the future.
 

Mr. Uncompetitive

She had a habit of meeting all of the artists...
is a Contributor Alumnus
Hi, I just started grad school with the intent of doing Vision research. I don't have an actual advisor/research project at the moment, or much to talk about myself, so I guess I'll just respond to stuff.

Facebook released an extension of their PyTorch library that attempts to provide some useful functions for learning from 3D space or translating into 3D space. It doesn't work very well from what I've seen, but if you could create 3D environments from 2D images, that would be a game changer, particularly for the 3D modelling industry.
There's already a ton of research on 3D scene modeling from 2D images. It's not really my cup of tea in terms of research interests, so I can't say much about it other than knowing it's fairly popular, but if you wanna take a deep dive, you can search through some papers here.

For the purposes of this discussion though, try to refrain from this "AI is basically just statistics" stuff unless you are willing to provide a meaningful argument.
AI is a deceptively broad field, but honestly statistics/data science is a consistently important factor, especially with pre-processing: when you're doing applications, usually the bigger concern is your actual data rather than whatever neural net or ML algorithm you've decided to use, and you can always just import a model from sklearn or use a pre-built network like ResNet rather than spend the effort making stuff yourself. The hardcore statistics and linear algebra really come into play when you're doing hardcore optimization or any other math-heavy research into machine learning. From my understanding, neural nets are a pain in the ass to optimize with just math (mostly because backpropagation is a bitch), so neural net research is more about clever applications, different weighting functions, extensions of neural network frameworks... more "building blocks" than truly going for mathematical optimization. And then there's Reinforcement Learning (what Magcargo is doing), which is less linear algebra and more "Markov chains on cocaine".
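As a rough illustration of the "just import a model" point, here's a minimal scikit-learn sketch on a built-in toy dataset; nothing about it is specific to any real project:

```python
# Minimal sketch of "just import a model" in scikit-learn: a standard
# preprocessing + classifier pipeline on a built-in toy dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling + an off-the-shelf classifier; no model-building effort required.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```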

Also, the project showed me how much faster C++ was than Python (IIRC my friends were able to do 3x as many simulations in a 10-second time frame).
C/C++ being faster is true in general, but Python's packages are just way easier to use, imo. GPUs are a much easier way to get a speed boost, and if you really care about low-level optimization (though I'd imagine most Python ML packages are already well-optimized), CPython is implemented in C, so you can drop down to C extensions if you absolutely need to.
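Here's a rough sketch of the GPU point in PyTorch: the same matrix multiply timed on CPU and (if available) CUDA. The matrix size and helper name are arbitrary, and the exact numbers will obviously vary by hardware:

```python
# Rough sketch: the same matrix multiply on CPU vs. GPU in PyTorch
# (skips the GPU run gracefully if no CUDA device is present).
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()          # make sure setup has finished
    start = time.perf_counter()
    c = a @ b
    if device == "cuda":
        torch.cuda.synchronize()          # wait for the GPU kernel to finish
    return time.perf_counter() - start

print("cpu :", time_matmul("cpu"), "s")
if torch.cuda.is_available():
    print("cuda:", time_matmul("cuda"), "s")
```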


As a general response/aside, I've noticed that I don't really get too unsettled by robot uprisings or rogue AIs or any of that stuff lol (I'm more the type to get freaked out by genetic or anatomical research :blobastonished:)
 

Ryota Mitarai

Shrektimus Prime
is a Tiering Contributor, is a Contributor to Smogon, is a Top Smogon Media Contributor
(qualifications for this post: aspiring to be a web developer)

Note: C++ is faster if you know what you are doing. Realistically speaking, C++ can be really slow if you don't know how to write it, but I guess that's true for any language, not just C++.

Anyways, I think that while automation will certainly take over the whole business world at some point, it really won't be that soon (e.g. within 5 years). AI and machine learning have developed and are developing tremendously, but based on my knowledge (and that of other people I know who are better versed in this topic than me), they are still not *that* advanced. I think it's fair to say that most of us won't be alive by the time AI starts massively robbing people of jobs.

AI has a presence in many fields, most notably data science and sometimes marketing, but I can't really say I have seen it being prevalent (or even the industry standard) anywhere else. I don't do data science, so I wouldn't know, and marketing still has a lot of real people doing the work, depending on the type of marketing and its purpose.

However, the "This Person Does Not Exist" AI is definitely disturbing, mainly because it assists many people in catfishing others on social media, but that's just my opinion. I honestly wonder whether it should even exist; maybe that could be a good topic to discuss.
 

Yung Dramps

awesome gaming
I can't find the thread, but I remember a while ago I posted an idea somewhere on this site (probably somewhere in the late Firebot) about a hypothetical machine that would scan your brain to automatically draw whatever you imagined, removing the barrier for those with big imaginations but without the talent to bring them to life in illustrated form. When I typed that up, it was total science fiction to me, an ethereal concept centuries away.

I could've never imagined that dream device, or even something akin to it, was right around the corner.


This is the first of these AI projects to leave me truly floored and tingling with excitement. The implications are unreal, and I probably haven't even scratched the surface of how this will change creative work forever. Even the most artistically inept (myself included) will be able to craft picture books, detailed posters, and graphic novels. If/when this thing is fine-tuned to recognize pop culture characters, fanfiction and fanart will enter a whole new era. The most riveting advertisements and trailers will be made by typing them up in a word document.
 
