Quick notes on Language Models
A study submitted in July 2023 reported that GPT is getting “dumber”, with decreased performance on math problems and visual reasoning. This caused a bit of a buzz on the internet, so I’d like to make some notes here.

An important note about these notes (jokes and puns aside): they all come from my head, from my experience and world knowledge. I did not consult any references to make them (except for the links, of course), so be aware of this.

Here are the notes:

GPTs and Transformer-based Language Models learn from text data (as you may know) and represent this acquired knowledge as something like ontologies (roughly speaking, networks of interconnected concepts). Thus, prompting a GPT or any other language model is basically “querying an ontology”, of course, in a very smart and practical way. Hence, a Language Model does not actually do calculus per se, but queries the “ontology” to try to figure out the result. Hence (again), we shouldn’t expect that much of GPT in the “math side of the ...
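To make the “querying an ontology vs. doing calculus” point concrete, here is a minimal Python sketch. It is my own toy illustration (not how GPT actually works internally, which relies on learned continuous representations, not a literal lookup table): a pretend “model” that stores associations seen in its training text in a graph-like dict and answers prompts purely by lookup, so it gets memorized facts right and never actually computes anything.

```python
# Toy illustration (an assumption of mine, not GPT's real mechanism):
# a "language model" that stores text associations in a dict acting as
# a tiny "ontology" and answers prompts by lookup, never by computing.

training_text = [
    "2 + 2 = 4",
    "3 + 5 = 8",
    "the capital of France is Paris",
]

# Build the "ontology": map each prompt-like prefix to its continuation.
ontology = {}
for sentence in training_text:
    if " = " in sentence:
        prefix, answer = sentence.split(" = ")
        ontology[prefix] = answer
    elif " is " in sentence:
        prefix, answer = sentence.split(" is ")
        ontology[prefix] = answer

def query(prompt: str) -> str:
    # Lookup, not calculation: this "model" can only return
    # associations it has already seen during "training".
    return ontology.get(prompt, "<no association found>")

print(query("2 + 2"))                  # "4"     -- memorized
print(query("the capital of France"))  # "Paris" -- memorized
print(query("17 + 26"))                # fails   -- never seen, never computed
```

Real models generalize far beyond a lookup table like this, of course, but the point of the note stands: the answer comes from associations learned over text, not from an arithmetic unit, which is why disappointing math performance shouldn’t surprise us that much.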