But isn’t that true even of the most basic travel chess computer from the 80s? Beyond the “opening database”, there never was a database of all possible moves to refer to, was there?
I understand that game algorithms (with the help of today’s processing speed) now have more resources than ever to consider options, and eventually to “prioritize” them based on previously observed results, including more advantageous or disadvantageous positional patterns.
And that, given those resources, they can identify a series of moves that would have escaped a person is not particularly surprising?
Of course, that might lead to the question of whether intelligence (even human intelligence) is essentially just a randomization of thought, followed by filtering and prediction based on experience data and historically successful or rewarding thought patterns?
In which case, maybe it truly is “easier” to replicate than one might think?
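The “consider options, then prioritize them” process described above is classic game-tree search. Here is a minimal sketch in Python, using a toy take-1-or-2-sticks game rather than chess (the game and the function names are mine, purely for illustration - real chess engines layer pruning, evaluation heuristics, and opening books on this same skeleton):

```python
def minimax(sticks, my_turn):
    """Return the game value (+1 = I win, -1 = I lose) of a position
    where players alternately remove 1 or 2 sticks and whoever takes
    the last stick wins."""
    if sticks == 0:
        # The previous player took the last stick, so the side to move lost.
        return -1 if my_turn else +1
    moves = [m for m in (1, 2) if m <= sticks]
    values = [minimax(sticks - m, not my_turn) for m in moves]
    # "Prioritize": I pick my best option, the opponent picks my worst.
    return max(values) if my_turn else min(values)

def best_move(sticks):
    """Generate all legal options and pick the highest-valued one."""
    moves = [m for m in (1, 2) if m <= sticks]
    return max(moves, key=lambda m: minimax(sticks - m, False))
```

For example, `best_move(4)` returns 1, leaving the opponent a losing multiple of 3 - the machine “escapes” nothing a patient human couldn’t find, it just searches exhaustively.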
Go is many orders of magnitude more complex than chess, in terms of possible moves. AlphaGo apparently only needs a complete set of rules for the game from which it can generate moves on its own, not unlike how most of us learn how to play a new game.
Reminds me of the uproar when calculators became widely available. Now, no one disputes using a calculator. Kids can’t do simple arithmetic as well as the older generation but they seem to be doing fine.
But is any of this new? In the past, limited processing power made it desirable to refer to a supplied static database of opening moves and responses to a certain depth, because otherwise response times would have been too long.
But the capability of playing an entire game without a database had always been there - just not compatible with our patience?
Finally, if you reboot AlphaGo, and it starts “learning” the game, and it creates the “insights” it then retains to play future games - isn’t that essentially yet another database, except one that is dynamically created rather than “shipped” with the product?
I see the advancement in today’s gaming algorithms in the “metadata” that is now stored in the “database” - similar to once being limited to reading entire sequential files in linear fashion, versus now using SQL to relate complex sets of data, revealing facts that had previously escaped us.
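The sequential-vs-relational contrast drawn above can be made concrete with Python’s built-in sqlite3 module. The table names, columns, and sample data here are invented for illustration:

```python
import sqlite3

# Two separate "files" of facts: games played, and who won them.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE games (id INTEGER, opening TEXT)")
cur.execute("CREATE TABLE results (game_id INTEGER, winner TEXT)")
cur.executemany("INSERT INTO games VALUES (?, ?)",
                [(1, "Sicilian"), (2, "Sicilian"), (3, "Caro-Kann")])
cur.executemany("INSERT INTO results VALUES (?, ?)",
                [(1, "black"), (2, "black"), (3, "white")])

# One declarative query relates the two sets and surfaces a pattern
# (which openings black tends to win) that a linear read of either
# file on its own would not reveal.
rows = cur.execute("""
    SELECT g.opening, COUNT(*) AS black_wins
    FROM games g JOIN results r ON r.game_id = g.id
    WHERE r.winner = 'black'
    GROUP BY g.opening
""").fetchall()
print(rows)  # -> [('Sicilian', 2)]
```

The data was always there; the relational query is what turns it into an insight.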
Not joking - I think what I’m looking for is for ChatGPT to be told to “learn” Go - and it goes out, finds a PDF of the game’s pamphlet on the Hasbro site, maybe even digests a few papers written on Go strategy, and then starts playing the game, incorporating all those insights.
As long as one needs dedicated game software, I’m of course still impressed by its technological advancements, but less inclined to compare it to human intelligence, or to call it “intelligent”.
Actually, quite a bit is “new”. AlphaGo was trained the way most people assume all ML models are trained: it was fed massive quantities of human data (past games). It defeated Go master Lee Sedol in 4 out of 5 games in 2016. Lee retired in 2019, stating that he could never be the top Go player and describing the dominance of AI as “…an entity that cannot be defeated.”
The next generation, called AlphaGo Zero, was trained differently - it was provided with the rules of Go and then allowed to learn on its own. While it took AlphaGo months to achieve a Go ELO rating of 3739, it took AlphaGo Zero 36 hours to achieve an ELO of 5185. (Currently, the highest-rated human holds an ELO of 3829.) AlphaGo Zero makes moves that humans simply can’t understand. Interestingly, researchers tried training the AlphaGo Zero network on human games (the way AlphaGo was trained) and found, to their surprise, that it actually performed more poorly - suggesting that it learns better on its own, discovering strategies that humans have not considered. To me, this is a great example of the promise of the various forms of machine learning.
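The “rules only, learn by self-play” idea can be sketched at toy scale. Here the agent is given nothing but the move generator for a trivial Nim-like game (take 1 or 2 sticks; whoever takes the last stick wins) and builds its own “database” of position values from random self-play. This is purely illustrative - AlphaGo Zero uses deep networks and tree search, not a lookup table:

```python
import random

def legal_moves(sticks):
    # The only knowledge supplied to the agent: the rules of the game.
    return [m for m in (1, 2) if m <= sticks]

def self_play_values(start=10, episodes=20000, seed=0):
    """Estimate, for each position, how often the side to move wins
    when both sides play random self-play games."""
    rng = random.Random(seed)
    wins = {s: 0 for s in range(1, start + 1)}
    visits = {s: 0 for s in range(1, start + 1)}
    for _ in range(episodes):
        sticks, player, history = start, 0, []
        while sticks > 0:
            history.append((sticks, player))
            sticks -= rng.choice(legal_moves(sticks))
            player = 1 - player
        winner = 1 - player  # whoever just took the last stick won
        for pos, mover in history:
            visits[pos] += 1
            wins[pos] += (mover == winner)
    return {s: wins[s] / visits[s] for s in wins if visits[s]}

def greedy_move(sticks, value):
    """Move to the position that looks worst for the opponent;
    taking everything (reaching 0) is an immediate win."""
    return min(legal_moves(sticks), key=lambda m: value.get(sticks - m, 0.0))

values = self_play_values()
# The learned values steer the agent toward leaving the opponent a
# multiple of 3 - the known winning strategy for this game.
```

Rebooting and retraining (a fresh `self_play_values()` call) regenerates that “database” from scratch, which is exactly the dynamically-created-versus-shipped distinction raised earlier in the thread.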
When a human is born, it learns and creates “insights” stored in its brain to navigate life. Is that just another database? Those are the types of discussions that ML researchers delve into over cold beverages.
After a recent mammogram, I arrived home and already had results. They had used “computer-aided detection.” I still don’t know whether a human is also planning to look at the images.
It was the first time I had seen that, or had results immediately like that. I will admit I hope I will get confirming results from a human expert.
UCAS is dropping the personal statement requirement starting next application cycle. They cite as the reason that affluent students have an advantage, but I wonder if ChatGPT was also a factor? I also wonder if any US schools will follow?
Ha, those real estate listings need ChatGPT. I spent a few years reading them (selling a property FSBO, so I wanted some models) - ugh! I actually offered to rewrite listings for some young guys starting out in the flip world - they took me up on it. I just felt bad about the poor writing and the inability to properly market something that way.
My sister and I were not happy with the listing our dad’s realtor wrote for his house. Typos and inaccuracies. We asked her to change them and she didn’t. We ended up taking the house off the market for a while, due to Dad’s poor health, and when we relisted it we went with another agent. The first one was upset and asked us to reimburse part of her out-of-pocket costs, such as photography. Uh, no…