So I figured I'd give this a try -- not with the Tutor ChatGPT (maybe there's a different component to it) but with the regular ChatGPT -- for vascular dementia. Something to add when using the "concept explainer" portion: have ChatGPT chunk concepts for you, break a topic down into its most important parts, and create emotionally salient stories or cases to enhance memory retention.
When tackling vascular dementia, I started by asking it to break the topic down into the most relevant components one would need in order to gain a suitable grasp of the concept (epidemiology, risk factors, suspected etiologies, clinical findings, and so on), then had it expand on each component, asking questions along the way to explore each one. To create emotional salience and enhance memory retention, I had ChatGPT link the material to real-world cases that would be relevant or interesting (e.g., famous individuals with vascular dementia and how their behavioural patterns could have resulted from some of their underlying pathologies).
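For anyone who wants to try the same approach, an opening prompt in that spirit might look something like this (illustrative wording, not a transcript of what I actually sent):

```
Break vascular dementia down into the components someone would need in order
to get a working grasp of it (epidemiology, risk factors, suspected
etiologies, clinical findings, and so on). Then expand on each component one
at a time, and tie each one to a memorable real-world case or well-known
individual so it sticks.
```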
Unfortunately, I found the "testing" function to be quite limited when trying to gain depth in a topic, especially with multiple choice. The best test likely lies in the explanatory phase, when you explain a concept back to ChatGPT and have it check your reasoning. I wonder whether ChatGPT could then figure out where your reasoning went awry and help you correct it, but that's just an afterthought.
Great questions. So what's your normal process, and how does this compare? Off the bat, I agree: re-explaining a concept and then matching your explanation against a pre-selected best explanation is the easiest way to use baseline GPT for learning. In terms of chunking concepts, GPT is bad at suggesting this because there are many ways to subcategorize a concept. Chunking is non-linear: there are too many equally valid chunking methods, and since the model has no way to rank them, it struggles to even access chunking as a baseline meta-strategy. It just goes vertically down a teaching decision tree instead of taking creative liberty and running with any one strategy. That's my theory for an untrained GPT, at least. You can build a second RAG (retrieval-augmented generation) "decision matrix" that gives it the "freedom" to be creative within restrictions. This is partly why it hallucinates: it doesn't have an internally consistent framework where it can A/B test its own logic. To solve this, we'd need to feed it that internal framework (ask Chris about this; he's building exactly this for medicine, and you could do the same for cutting-edge psych).

Also, if you ask it to test you -- the easiest example is a multiple-choice test -- it should generate questions and answers from a restricted test bank first, then match its questions against that internal standard. Otherwise it will do its GPT "next best average answer" thing and give crazy answers.
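To make the "restricted test first" idea concrete, here's a minimal sketch of how it could look in code, assuming the OpenAI Python SDK; ANSWER_KEY, generate_mc_item, and the model name are placeholders of mine, not anything built into GPT:

```python
# Minimal sketch: the model may only write MC items whose stem and correct
# answer come from a fixed, human-vetted answer key, and each generated item
# is checked against that key before it is accepted.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder key: in practice this would come from a vetted question bank.
ANSWER_KEY = {
    "What is the second most common cause of dementia?": "vascular dementia",
    "Which imaging finding supports a vascular etiology?": "white matter infarcts",
}

def generate_mc_item(stem: str, correct: str) -> str:
    """Build one multiple-choice item around a known question/answer pair."""
    prompt = (
        "Write a multiple-choice question.\n"
        f"The stem must be: {stem}\n"
        f"The correct option must be exactly: {correct}\n"
        "Add three plausible distractors. Label the options A-D and state the answer."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    item = response.choices[0].message.content
    # Reject anything that drifted away from the keyed answer.
    if correct.lower() not in item.lower():
        raise ValueError("generated item lost the keyed answer")
    return item

for stem, correct in ANSWER_KEY.items():
    print(generate_mc_item(stem, correct))
```

The point of the final check is exactly the "internal standard" above: the model never gets to invent the correct answer, only to dress up an answer you already vetted.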
So what's actionable from all this: if you liked the method's pacing, then to improve the questioning, generate the questions yourself and feed it the ideal sets of answers as a very strict grading standard. I can teach you how if you'd like; I've done this for essays, multiple choice, and poster presentations. For example, I'd grade the presence of a concept as a 0 or 1.
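Here's the same kind of sketch for the 0-or-1 concept grading, again assuming the OpenAI Python SDK; RUBRIC and grade_answer are illustrative names of mine, not a real grading API:

```python
# Minimal sketch: each required concept is scored strictly as present (1) or
# absent (0) in the student's answer, against a fixed rubric written by a human.

from openai import OpenAI

client = OpenAI()

# Placeholder rubric: one row per concept the answer must contain.
RUBRIC = [
    "mentions chronic hypertension as a risk factor",
    "distinguishes stepwise decline from gradual decline",
    "names at least one relevant imaging finding",
]

def grade_answer(student_answer: str) -> dict[str, int]:
    """Score each rubric concept as 1 (present) or 0 (absent)."""
    scores = {}
    for concept in RUBRIC:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{
                "role": "user",
                "content": (
                    f"Student answer:\n{student_answer}\n\n"
                    f"Does this answer clearly do the following: {concept}?\n"
                    "Reply with exactly one character: 1 for yes, 0 for no."
                ),
            }],
        )
        reply = response.choices[0].message.content.strip()
        scores[concept] = 1 if reply.startswith("1") else 0
    return scores

print(grade_answer("Vascular dementia often shows stepwise decline after strokes."))
```

Forcing a single-character verdict per concept keeps the model from doing its "next best average answer" thing: there's nothing to average over, just a binary call against your standard.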
Thanks for reading and trying!!!!
I appreciate the piece, Daren! I'm so glad the Tutor was useful for you. I've personally used it recently to learn more about flow cytometry, X-rays, tinnitus, Lexapro's mode of action, and a few other things. I also suspect it might be useful for people who want to learn about things they don't feel comfortable asking an actual human about.
I do think a pre-test/post-test function is a good idea. Should be easy to implement, given how dead-simple GPTs are to create.
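For what it's worth, the whole pre/post flow could probably live in the GPT's instructions field. A rough sketch of what that configuration text might say (hypothetical wording, not the Tutor's actual setup):

```
You are a tutor. When the user names a topic:
1. Pre-test: ask five short questions on the topic before teaching anything.
   Note which ones the user gets right, but don't reveal the answers yet.
2. Teach the topic, prioritizing the concepts the user missed.
3. Post-test: re-ask the same five questions in shuffled order.
4. Show the pre-test and post-test scores side by side and summarize which
   concepts improved and which still need work.
```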
Be well, and thanks again!
Hi David, thank you so much for your comment and follow. It means a lot to me. I used GPT for my PhD defense a while back and found it to be the best learning method. I agree on the pre/post idea; it would be amazing for test grading, as you mentioned elsewhere. However, when it comes to long-form questions, you'd need to constrain it so that the questions and answers are objective.
In my opinion, most people using GPT don't engage in self-prompting or second-order thinking, and are therefore unaware of their knowledge gaps. But good GPT layers like the Tutor prompt the user and make the experience more engaging and lifelike. I hope this becomes the default modality for GPT-5 or any new GPT. Please let me know if there are any specific GPTs you'd like me to take a look at. Best, D.
OK, I kind of have to admit that this is a bit over my head, but it has piqued my interest. (I just received my used hardback of the complete writings of Spinoza, lol, so my brain is also pre-scrambled.)
Your presentation of the subject seems quite professional, and I intend to revisit it! I’m in an exploring mood... thank you!
Yea, that's a great topic. I haven't read him, but he's definitely on my reading list. What I would do is plug in whatever PDF or copy of Spinoza you can find (I can private message you some PDF websites) and then ask the GPT to prompt you with questions back and forth. It's a great way to engage with challenging content and refresh your memory if you want an alternative way to work through material.
That would be great. I'm on my second day with the book, and though I usually skip the introduction, I found it engaging, so I've yet to tackle Spinoza head-on. I'm old school and will take my time with it (researching words I'm not familiar with, etc.). I can read a seven-word sentence by Emerson and then contemplate it on my porch for several days. So send the PDFs, but I probably won't engage with GPT right away. I don't even think I have GPT. And since most original songwriters are broke (I'm optimistic in this regard), I may not, at the moment, be able to pay for anything new. I have an idea I'd like to float by you, but I prefer to correspond by email rather than private messaging.