This thread is for technical or philosophical discussion about what I think is one of the more important topics of our lifetime. That's saying a lot coming from me, because I know this field is riddled with people who overhype it or even fearmonger about it. It's second only to cryptocurrency as the most annoying topic in software.
Probably the laziest way to define Artificial Intelligence is: technology that imitates the human mind. This matters because it means even a simple program like Akinator (which guesses the character you're thinking of) counts as artificial intelligence, despite being little more than a decision tree. A program doesn't have to be dynamic or even particularly intelligent to be artificial intelligence. For the purposes of this discussion, though, try to refrain from the "AI is basically just statistics" stuff unless you're willing to back it up with a meaningful argument.
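To make the point concrete: a guessing game in the Akinator mold can be nothing but a hand-written decision tree. Here's a toy sketch (the questions and answers are made up for illustration; the real Akinator works from a far larger, learned question database):

```python
# Toy Akinator-style guesser: a hard-coded yes/no decision tree.
# No learning, no statistics -- just branching -- yet it "imitates"
# the mind-reading trick well enough to fit the lazy definition of AI.

def guess(answers):
    """Walk a fixed decision tree. `answers` maps question -> True/False."""
    if answers["is it real?"]:
        if answers["is it an animal?"]:
            return "a dog"
        return "a rock"
    else:
        if answers["can it fly?"]:
            return "a dragon"
        return "a unicorn"

print(guess({"is it real?": True, "is it an animal?": True}))  # a dog
print(guess({"is it real?": False, "can it fly?": False}))     # a unicorn
```

Whether a fixed tree like this deserves the label is exactly the kind of argument worth having in this thread, as long as it's an argument and not a dismissal.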
It's important to note this thread is not just about machine learning or robotics. Those are sub-categories of artificial intelligence. You can really use this to bring up any interesting philosophical observation about AI, maybe some area you plan on studying, what you're working on, etc.
Here are some starting topics:
1. 3D Machine Learning
Facebook released PyTorch3D, an extension of their PyTorch library that provides functions for learning from 3D data or translating into 3D space. It doesn't work very well from what I've seen, but if you could create 3D environments from 2D images, that would be a game changer, particularly for the 3D modelling industry.
2. Facial Recognition
May as well put this out there since it's like the textbook example of scary AI shit. I think it's fine to bring up political implications but don't use that as a chance to derail.
3. Neuralink
Elon Musk's stupid ass company that wants to put a microchip in your brain. Literally. It's like people's biggest fear about Bill Gates putting a chip in your brain except this time it's actually happening and everyone is cool with it. I claim this has some serious philosophical implications about AI.