How can AI help shorten wait times and eliminate prejudice in the healthcare system?
Grade 9
Presentation
Problem
How can AI help shorten wait times and eliminate prejudice in the healthcare system? It's no secret that the past few years have put considerable strain on the healthcare system. Patients can wait hours in the emergency room, which can be the make-or-break for undetected life-threatening diseases or injuries. Using AI, we could do part of the diagnosis beforehand and potentially save many lives in the process! Humanity will always be a heavily biased and prejudiced species. Bias touches every aspect of our lives, from the colour of shirt you choose to which person you select for a job interview. This is no different for healthcare workers. In Canada alone there have been many cases where people were refused care for racial or gender-based reasons. Using AI, we can build a safer, more dependable system for all!
Method
1. Make my question as precise as possible
2. Discuss with my teacher my plans for my science project
3. Do some brief research to get a general idea of my topic
4. Talk to friends or family who might be knowledgeable in the subject
5. Once I have a brief understanding of my project, start real research (look at websites, videos, magazines)
6. Write down all of my research (In logbook and on docs)
7. Review my sources to make sure they're accurate + write my sources out
8. Write down my research on the CYSF website
9. Edit all my writing
10. Review my work with my teacher
11. Finalize everything for the due date
12. After due date obtain tri-fold
13. Write the things that will go on my board
14. Assemble my board
15. Practice my presentation (In front of family, friends, and teachers)
16. Make sure everything is in order and go set up at the science fair
17. Go to the science fair and present :)
My idea is to have many little cubbies in the hospital waiting room where patients can go talk to an LLM. They would input their symptoms, and unless race or gender were necessary for the diagnosis, the model simply wouldn't ask. The symptoms would then be sent to a nurse or doctor, who would get an unbiased view of them. Having multiple cubbies would also speed up wait times. The LLM could also help doctors organize paperwork so they can sleep more and take care of themselves.
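To make the idea more concrete, here is a rough sketch in Python of what the intake step at one of these cubbies might look like. Everything in it (the function names, the prompts, the keyword check) is made up for illustration; it is not a real medical system, just a way to show the flow: collect symptoms, only ask for demographics when they might actually matter, and forward an anonymized report.

```python
# Toy sketch of the cubby intake idea. All names and keywords are invented
# for illustration; a real system would need clinically validated rules.

def collect_symptoms():
    """Ask the patient to describe their symptoms at the kiosk."""
    return input("Please describe your symptoms: ")

def needs_demographics(symptoms):
    """Hypothetical check: only ask about race or sex when it could be
    medically relevant to these particular symptoms."""
    relevant_keywords = ["sickle cell", "pregnan"]  # illustrative only
    return any(word in symptoms.lower() for word in relevant_keywords)

def build_report(symptoms):
    """Build the anonymized report that gets forwarded to a nurse or doctor."""
    report = {"symptoms": symptoms}
    if needs_demographics(symptoms):
        report["extra_info"] = input("This detail may be medically relevant (optional): ")
    return report

if __name__ == "__main__":
    report = build_report(collect_symptoms())
    print("Forwarding to care team:", report)
```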
Research
Large Language Models
A large language model is a type of generative AI that learns and improves with the help of large datasets. It can be taught to predict and generate new text. LLMs are built on neural networks, which were inspired by the human brain! Much like the human brain, it takes time for these LLMs to learn and start to actually perform the task at hand, and it takes a lot of data and effort to train them. So how do these large language models work?
Transformers
To start training an LLM we most commonly use a transformer model, which translates our datasets into a form the computer can understand. We do this so that the computer can start to see patterns and wrap its "head" around the data it is given. Transformer models aren't the only thing at work here! They work hand in hand with self-attention mechanisms, which enable the LLM to learn in a quicker and more efficient manner. Self-attention lets the LLM look at the entire phrase and take in context and syntax, unlike older approaches that looked at individual words one at a time without considering context, and this makes it much more efficient and accurate. Now this is just a small part of the technology behind an LLM! Here are the key components that make one work.
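As a tiny, hedged example of what a pretrained transformer can do, the sketch below uses the Hugging Face transformers library (one of the sources listed in the citations) to load a small public model and continue a sentence. The choice of "gpt2" as the model is just an assumption for illustration; any small text-generation model would do.

```python
# Minimal sketch: text generation with a small pretrained transformer.
# Requires `pip install transformers` (and a backend such as PyTorch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # "gpt2" chosen as a small example

result = generator("The patient reported chest pain and", max_new_tokens=20)
print(result[0]["generated_text"])
```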
The Embedding Layer
Using embeddings, we can turn words and sentences into numbers that a computer will understand. A vector is a way to represent a word numerically. We use these so that the LLM we are training can understand grammar and the meaning of words; they also help the computer recognize patterns. There are many different types of embeddings, the most common being word embeddings, which represent each word as its own unique vector. This is necessary in machine learning because it helps the LLM understand our languages in a more profound manner: it puts sentences into context, distinguishes between synonyms used in different contexts, and helps the computer understand grammar and how it affects a sentence. Richer embeddings are generally better, but they can be very costly and take a lot of time, so many models use multi-head attention over the embeddings. Each attention "head" produces its own interpretation of the same words; the interpretations are scored, the higher-scoring ones are given more weight, and they are combined into an even better representation. Embeddings group similar words with similar numbers, but this can be a problem when the same word has different meanings, because the computer doesn't know the difference between the two. This is why so many more layers are necessary for an LLM to work smoothly!
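To show what "similar words get similar numbers" means in practice, here is a toy example with made-up three-dimensional vectors. Real embeddings have hundreds of dimensions and are learned from data, not typed in by hand.

```python
# Toy word embeddings: similar words point in similar directions.
import numpy as np

embeddings = {
    "doctor": np.array([0.9, 0.1, 0.3]),
    "nurse":  np.array([0.8, 0.2, 0.4]),
    "carrot": np.array([0.1, 0.9, 0.0]),
}

def cosine_similarity(a, b):
    """Closer to 1.0 means the two vectors (and words) are more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["doctor"], embeddings["nurse"]))   # high
print(cosine_similarity(embeddings["doctor"], embeddings["carrot"]))  # low
```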
The Feedforward Layers
The feedforward layer is made up of multiple layers that are all connected to each other; these layers transform the input embeddings and allow the model to predict the next word. We can assign specific scores to each word so that, when the words appear in a sentence, the model can combine those scores and decide what the sentence means. For example, we can use positive and negative. If a word is positive, like "good", it gets a higher score. If a word is negative, like "bad", it gets a lower score. If a word is neutral, it gets no score. If we put in a sentence like "Carrots are good", the model will know the sentence is positive because it has a higher total score; "Carrots are bad" gets a lower score, so the model knows that sentence is negative. This is a simple form of sentiment analysis. You can use this type of training to reinforce or teach your model almost anything. It helps the model predict the next word in the sentence and gain a deeper understanding of what the person is trying to say!
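The "Carrots are good" example can be written out directly in code. This is only a caricature of sentiment analysis (the word scores below are invented; a real model learns them during training), but it shows how adding up scores lets a program decide whether a sentence is positive or negative:

```python
# Toy sentiment scoring, mirroring the "Carrots are good" example above.
word_scores = {"good": +1, "great": +2, "bad": -1, "terrible": -2}  # made-up scores

def sentence_score(sentence):
    """Add up the score of every word; a positive total means a positive sentence."""
    return sum(word_scores.get(word.lower().strip(".,!?"), 0)
               for word in sentence.split())

print(sentence_score("Carrots are good"))  # +1 -> positive
print(sentence_score("Carrots are bad"))   # -1 -> negative
```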
The Recurrent Layer
The recurrent neural network is, well, recurrent. It is almost the same as a feedforward network, just with an extra step: instead of information going straight through, it loops information back through the network. This gives the recurrent layer sequential memory. There is a catch: over time, as the network feeds information around that loop, its memory of the first part of the sequence becomes smaller and smaller. Even so, the recurrent network is still an important building block in many language models. It can forward information to the feedforward network, where it is processed and a prediction is made. This layer helps the model observe the relationships between words and improve its grammar and sentence structure.
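The "loop" that gives a recurrent network its memory can be shown in a few lines. In this sketch the weights and the five fake "words" are random numbers, purely for illustration; the point is that the same hidden state is fed back in at every step:

```python
# Minimal recurrent loop: the hidden state carries information from step to step.
import numpy as np

rng = np.random.default_rng(0)
hidden_size, input_size = 4, 3
W_x = rng.normal(size=(hidden_size, input_size))   # input -> hidden weights
W_h = rng.normal(size=(hidden_size, hidden_size))  # hidden -> hidden weights (the loop)

hidden = np.zeros(hidden_size)
sequence = [rng.normal(size=input_size) for _ in range(5)]  # five fake "words"

for x in sequence:
    # The previous hidden state is mixed with the new input at every step.
    hidden = np.tanh(W_x @ x + W_h @ hidden)

print("Final hidden state:", hidden)
```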
The Attention Mechanism
The attention mechanism is what allows the model to focus on individual parts of the text; it helps the model find the keywords most relevant to the task at hand. It puts similar words into context by looking at the numbers assigned by the embeddings: the closer the numbers are together, the more similar the words. Within a sentence, the model can see that two words are similar, so they must be related, and that helps it work out the meaning of each word. Self-attention can also help the model with its grammar: it can check what it is generating against the input text, paying attention to its own output as it goes.
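Below is a small sketch of the arithmetic behind (self-)attention, using random numbers in place of real word embeddings. Each "word" scores every other word, the scores are turned into weights that sum to one, and those weights decide how much attention each word pays to the others:

```python
# Scaled dot-product attention on made-up vectors, for illustration only.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """scores = Q.K^T / sqrt(d); the weights say how much each word attends to each other word."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))
    return weights @ V, weights

rng = np.random.default_rng(1)
tokens = 4                                 # e.g. a four-word sentence
Q = K = V = rng.normal(size=(tokens, 8))   # toy 8-dimensional embeddings

output, weights = attention(Q, K, V)
print(np.round(weights, 2))                # each row sums to 1
```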
Training
LLMs are trained on large (hence the name large language model) amounts of data: enormous collections of text containing huge numbers of words. We feed all these datasets to the LLM so that it can start to learn sentence structure and language. With the help of the feedforward and recurrent components it also learns context, so it can learn to distinguish good from bad, hot from cold, or really whatever you want it to. One of the many advantages of this type of training is that it doesn't require as much human supervision.
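The core training idea, "learn to predict the next word", can be imitated with nothing more than counting. The toy "corpus" below is invented; real LLMs do the same thing with billions of learned parameters instead of a count table:

```python
# Next-word prediction from simple counts, as a stand-in for LLM pretraining.
from collections import Counter, defaultdict

corpus = "the patient felt dizzy . the patient felt tired . the nurse felt tired ."
words = corpus.split()

next_word_counts = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("patient"))  # -> "felt"
print(predict_next("felt"))     # -> "tired" (it appeared more often than "dizzy")
```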
Fine-Tuning + Prompt-Tuning
If you want an LLM to perform a specific task, then it has to be trained for that specific task. To do this, you create a specific dataset and start feeding the model prompts from it. The prompts are put through the LLM over and over again; when it gets the answer wrong, the LLM adjusts itself, and it repeats this until it reaches a consistent answer. An example would be asking the LLM "What colour is the sky?" The first time it might say "red". This answer is obviously wrong, so the LLM adjusts itself, and the next time it might correctly say "blue". This is all done by itself. After the LLM has trained itself, we then start asking the model questions ourselves. This is the second phase, human-supervised training: the model learns how to respond to human text and specific instructions. This improves its overall performance and makes it far more useful for human use!
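Here is a deliberately simplified caricature of that "adjust until the answer is right" loop, using the sky-colour example. Real fine-tuning updates millions of weights with gradients; this sketch just nudges two made-up scores until the model prefers the correct answer:

```python
# Caricature of fine-tuning on a single question, for illustration only.
answer_scores = {"red": 0.9, "blue": 0.1}   # the model starts out confidently wrong
correct = "blue"
learning_rate = 0.2

for step in range(10):
    guess = max(answer_scores, key=answer_scores.get)
    print(f"step {step}: model answers '{guess}'")
    if guess != correct:
        answer_scores[guess] -= learning_rate      # punish the wrong answer
        answer_scores[correct] += learning_rate    # reward the right one

print("Final answer:", max(answer_scores, key=answer_scores.get))  # 'blue'
```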
Multimodal AI
Multimodal AI is a type of AI that can take in multiple forms of data to create more precise and accurate predictions. Multimodal models take the next step: they don't just draw on existing information, but can go further and create new data in the form of images, audio, text or numbers. Because multimodal AI can process many different types of data, it has access to much more information, which helps it form much more developed answers and predictions! Not only that, but a multimodal model can loop its answer and the user's satisfaction back into the model so that it can form even more precise answers. This way the multimodal model can more closely imitate the human thought process.
Input Module
The input module is a series of neural networks whose job is to process all of the audio, text, and image data. Generally, companies use a different neural network for each type of data.
Fusion Module
This module is responsible for putting everything together. It fuses all the important or relevant data into a combined representation that uses the strengths of each type of input.
Output Module
This is the final product: the prediction, answer or recommendation the multimodal model has formed.
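A hedged sketch of that input → fusion → output flow is shown below. The "feature extractors" here just return random vectors as stand-ins for what real text and image networks would produce; the point is only to show the three modules fitting together:

```python
# Toy multimodal pipeline: input modules -> fusion module -> output module.
import numpy as np

rng = np.random.default_rng(2)

def text_module(text):
    """Stand-in for a text network: turn text into a feature vector."""
    return rng.normal(size=8)

def image_module(image):
    """Stand-in for an image network: turn an image into a feature vector."""
    return rng.normal(size=8)

def fusion_module(text_features, image_features):
    """Simplest possible fusion: concatenate the two feature vectors."""
    return np.concatenate([text_features, image_features])

def output_module(fused_features):
    """Stand-in for the final prediction layer."""
    return "flag for review" if fused_features.sum() > 0 else "looks routine"

fused = fusion_module(text_module("patient notes"), image_module("x-ray pixels"))
print(output_module(fused))
```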
Natural Language Processing (NLP)
This is how a multimodal model understands language. It helps the model pick up on sarcasm or sentences with double meanings, using many different tools. First it uses tokenization, which breaks a sentence into individual parts; in the sentence "Mia was running very fast", each word such as "Mia" would be one token. Then stemming and lemmatization are used to associate related words together while still understanding what each individual word means. All of this helps the multimodal model process audio and text.
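The tokenization step for that exact sentence can be shown with plain Python. The "stemmer" below is a crude, hand-written suffix-stripper used only for illustration; real NLP libraries do this far more carefully:

```python
# Toy tokenization and stemming for "Mia was running very fast".
sentence = "Mia was running very fast"

# Tokenization: break the sentence into individual pieces ("tokens").
tokens = sentence.split()
print(tokens)  # ['Mia', 'was', 'running', 'very', 'fast']

def crude_stem(word):
    """Strip a common suffix so related forms map together (e.g. 'running' -> 'run')."""
    for suffix in ("ning", "ing", "ed"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

print([crude_stem(t.lower()) for t in tokens])  # ['mia', 'was', 'run', 'very', 'fast']
```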
Computer Vision
This is the mechanism that helps the multimodal model process images, and it works through lots and lots of training! We train it using labeled images: when we feed it a new image, it compares that image to the labeled ones and takes a guess. At first the predictions make no sense, but as training goes on the model becomes more and more accurate.
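The "compare new images to labeled images" idea can be imitated with a nearest-neighbour guess on tiny made-up 2x2 "images". Real computer vision uses deep networks trained on millions of labeled photos, so this is only a sketch of the principle:

```python
# Toy image classifier: guess the label of the closest labeled example.
import numpy as np

labeled_images = [
    (np.array([[1, 1], [1, 1]]), "bright"),
    (np.array([[0, 0], [0, 0]]), "dark"),
]

def classify(image):
    """Pick the label of the labeled image with the smallest pixel difference."""
    distances = [(np.abs(image - known).sum(), label)
                 for known, label in labeled_images]
    return min(distances)[1]

print(classify(np.array([[1, 0], [1, 1]])))  # -> "bright"
print(classify(np.array([[0, 0], [1, 0]])))  # -> "dark"
```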
Text Analysis
This helps the model understand written text and the intent behind it.
Integration System
This filters out unnecessary information and helps align and prioritize the many datasets. This is important for the multimodal model to understand context.
Racism and Prejudice in the Healthcare System
Canada is known around the world for being one of the most accepting countries. Even so, racism and discrimination are still ever present in our society. There have been many accounts of people being denied care because of racial or gender stereotypes. Across the country, people of African or Indigenous descent have been denied care under the pretense that they were faking their symptoms to obtain narcotics. It doesn't even have to be full-blown denial: racism is also found in small, subtle, dismissive actions, whether having your opinions brushed off by healthcare personnel or simply being treated as inferior. This not only leads to worse care for members of minority groups, but can also create a fear of going to the doctor. When you are dismissed and treated as inferior during a regular physical at a clinic, it doesn't exactly make you want to return.

There have been many notable cases over the past few years, such as Joyce Echaquan, who filmed herself strapped to a hospital bed, pleading for help, while staff appeared cold and indifferent and hurled racist insults at her. She was left sedated, and when a nurse came to check on her a few hours later she was found dead in her hospital bed. The video she took raises many issues and demonstrates that Canada is far from perfect in this regard. This is just one case out of many. John Rivers suffered serious side effects after a spinal tap test but had to wait sixty days before receiving any medical attention, having been accused of faking his symptoms to access narcotics. Carol McFadden, an Indigenous woman, was told she shouldn't have come for an examination; she then put off having a breast screening for many years, and later discovered she had stage four breast cancer.

These cases highlight a systemic problem with racism in Canada's healthcare system, and it needs to change. We need to make Canada's healthcare system a place where everyone feels safe and no one has to fear discrimination or malpractice at their annual physical. We need to strive together to create a better world for everyone.
Present Day AI Developments in Healthcare
AI in healthcare isn't limited to diagnosis; it can also help with the development of new drugs! AI helped in the development of the Pfizer COVID-19 vaccine by processing information that would have taken researchers a month in a mere 22 hours! It can also help reconstruct and improve the quality of X-rays and MRI scans, making it easier for doctors to identify small changes that would not be visible in a lower-quality scan. Reconstructing the image also lowers the amount of time the patient has to spend in the MRI scanner, which can be an advantage for people who suffer from claustrophobia or high anxiety. AI has also been used to analyze X-rays on its own, determining whether each one looks normal; if a scan is flagged as abnormal, it is sent to a doctor. This shows that AI can accurately analyze medical images.
Med-PaLM was the first AI model to reach a passing score on medical licensing exam questions, with 67%; just an astounding three months later its successor, Med-PaLM 2, scored around 85%. This is an incredible innovation because it shows that AI can accurately answer medical questions much like a human. It proves that AI can actually help in the medical field, and that if we put aside pre-existing ideas, AI can help us revolutionize our healthcare system for the better.
It can help you organize medical records and help you ask questions about your health to make sure you get the best care possible. What's more, it's basically impossible for a human to memorize every medical condition or disease out there, but not for AI: it can help doctors make more educated decisions by letting them ask an LLM instead of having to scroll through different articles on Google. There are also many apps already helping people in their day-to-day lives; think of every fitness, sleep-monitoring, heart-rate and blood-pressure app out there, and they all have AI as one of their key components. It has so many uses, which highlights the importance of widespread implementation of AI in the healthcare system!
Problems
There can be many problems associated with AI, as there are with any new advancement. Some of these problems have to do with the AI itself: it can over-diagnose, or "hallucinate", where it basically makes things up or gets confused because of insufficient training data or incorrect assumptions. AI isn't perfect. Even so, we have made astounding developments in recent decades and I am sure we will make even more in the next few years. Another problem is human skepticism: there is a general distrust of AI, and even if we have all the technology, it will take many years for AI to be accepted. This is not to say we should put our complete trust in AI and technology. We still need to take everything AI says with a grain of salt, because while AI is good, it is not always 100% accurate.
Data
How can AI help improve our healthcare system? AI can help by shortening wait times, eliminating prejudice and creating a faster, more reliable form of diagnosis! There are many ways we could accomplish this, one of the most evident being large language models (LLMs) or multimodal AI models. If we had something like this on multiple computers around the waiting room, we could help doctors and nurses save thousands of lives!
Instead of going to the front desk and having to wait for hours just to see a nurse, you would put your symptoms into a computer that has been trained on enormous and diverse datasets so that you can receive a general diagnosis. If your condition were life-threatening, it would be flagged as such and forwarded to a doctor, who could then take an unbiased look at your symptoms. This would not only help eliminate prejudice but also speed up wait times for potentially fatal illnesses or injuries. As well, having multiple desks and computers to enter symptoms would speed up wait times incredibly. Not only does this help patients, it also takes a heavy load off nurses and receptionists, who work hard every day to make sure we get the care we need. They have to process hundreds of patients every day and keep track of every single one; with that much work, someone is bound to make a mistake. Which is why, if AI and healthcare workers work together, we can create a better, more reliable system for all!
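As a rough illustration of the flagging idea (not a real triage protocol), the sketch below sorts a pretend waiting room so that symptoms matching an "urgent" keyword list are seen first. The keywords and patients are invented; a real system would need clinically validated rules:

```python
# Toy triage queue: flagged cases move to the front, everyone else keeps their order.
URGENT_KEYWORDS = ["chest pain", "can't breathe", "severe bleeding"]  # illustrative only

def is_urgent(symptoms):
    return any(keyword in symptoms.lower() for keyword in URGENT_KEYWORDS)

waiting_room = [
    {"id": 1, "symptoms": "Sore throat for two days"},
    {"id": 2, "symptoms": "Sudden chest pain and dizziness"},
    {"id": 3, "symptoms": "Sprained ankle"},
]

queue = sorted(waiting_room, key=lambda patient: not is_urgent(patient["symptoms"]))
for patient in queue:
    flag = "URGENT" if is_urgent(patient["symptoms"]) else "routine"
    print(f"Patient {patient['id']}: {flag}")
```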
Every year, hundreds of patients are subject to the bias and prejudice of healthcare workers. Even when it's unintentional, we cannot deny that humans are heavily biased beings; we make unconscious assumptions about the people around us. This is why, if we use AI trained on unbiased datasets, we can make sure doctors get an unprejudiced view of a patient's symptoms. When a patient walks into a booth with a computer in it, the AI will ask about their symptoms, and unless race or gender is necessary for the diagnosis, the AI simply won't ask. The information is then sent to a qualified nurse or doctor, who gets an unbiased look at the patient's symptoms. They can make an early diagnosis without seeing the patient, and they will eventually see the patient to adjust that early diagnosis. They can use what the patient entered into the computer as a reference, and this can even help doctors recognize their own bias so they can start moving away from prejudice. In a world where everyone has their own thoughts and opinions, AI can be a valuable tool to help eliminate that bias and make a safer, more equal world for all.
Humans are bound to make mistakes; after all, we are only, well, human! When you're working long shifts and late hours, it is almost certain you will under-diagnose or over-diagnose a patient at some point. AI, while still capable of making mistakes, is not affected by tiredness; it can make informed decisions without being affected by its environment. Not only that, but AI can help doctors cut back on their working hours. While doctors are only at the hospital for a certain number of hours, they still have to do paperwork and organize information afterwards. AI can help take away this burden by organizing and filling out paperwork, so doctors and nurses can get their well-deserved sleep! Of course, a human must check it; after all, AI is not perfect. Even so, AI is an extraordinary asset for doctors and nurses everywhere!
AI is one of humanity's greatest and fastest-growing innovations to date. We can use this incredible technology to make our healthcare system a better place for all: by shortening wait times, eliminating prejudice and helping doctors make more accurate diagnoses. AI is part of humanity's future, and whether we use it for better or worse is entirely up to us. Even so, AI is a valuable tool to help make our world a better place for all Canadians!
Conclusion
AI can help our healthcare system in many different ways, whether it's shortening wait times, stopping bias, or helping doctors and nurses; it is a valuable partner that can help us in many aspects of our lives. If we are able to implement a system like this in hospitals, we can help people from all walks of life. We can make way for a system that helps doctors become less biased, a system where members of minority groups don't have to fear seeking medical aid. We can create a system where life-threatening injuries don't go undetected, and where doctors can focus not just on their patients' health but on their own. AI can help humanity in many different ways, and it is up to us to make a brighter future for all!
Citations
Artificial intelligence and healthcare: What's the link? Costco Connection Magazine, March 2024.
Joseph JW, Kennedy M, Landry AM, et al. Race and Ethnicity and Primary Language in Emergency Department Triage. JAMA Netw Open. 2023;6(10):e2337557. doi:10.1001/jamanetworkopen.2023.37557 (https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2810580)
Haggins Adrianne, MD, Bias in the emergency department, Feb. 17, 2022, AAMC
https://www.aamc.org/news/bias-emergency-department
Christian Angelo I. Ventura, Edward E. Denton, Benjamin R. Asack,:Implications of systemic racism in emergency medical services: On prehospital bias and complicity. https://doi.org/10.1016/j.eclinm.2022.101525
https://www.thelancet.com/journals/eclinm/article/PIIS2589-5370(22)00255-3/fulltext
Brenda L Gunn, Associate Professor, Robson Hall Faculty of Law, University of Manitoba, Canada. Submission to EMRIP the Study on Health: “Ignored to Death: Systemic Racism in the Canadian Healthcare System.” https://www.ohchr.org/sites/default/files/Documents/Issues/IPeoples/EMRIP/Health/UniversityManitoba.pdf
Boyer Y. Healing racism in Canadian health care. CMAJ. 2017 Nov 20;189(46):E1408-E1409. doi: 10.1503/cmaj.171234. PMID: 29158453; PMCID: PMC5698028. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5698028/
Indigenous Services Canada. Government of Canada actions to address anti-Indigenous racism in health systems. https://www.sac-isc.gc.ca/eng/1611863352025/1611863375715
Leyland, Andrew; Smylie, Janet; Cole, Madeleine; Kitty, Darlene; Crowshoe, Lindsay; McKinney, Veronica; Green, Michael; Funnell, Sarah; Brascoupé, Simon; Dallaire, Joanne; Safarov, Artem (Principal Authors). Health and Health Care Implications of Systemic Racism on Indigenous Peoples in Canada. Prepared by the Indigenous Health Working Group of the College of Family Physicians of Canada and Indigenous Physicians Association of Canada. https://www.cfpc.ca/CFPC/media/Resources/Indigenous-Health/SystemicRacism_ENG.pdf
Canadian Medical Association. Challenging anti-Indigenous racism in health care. Accessed March 14, 2024. https://www.cma.ca/latest-stories/challenging-anti-indigenous-racism-health-care.
Phillips-Beck W, Eni R, Lavoie JG, Avery Kinew K, Kyoon Achan G, Katz A. Confronting Racism within the Canadian Healthcare System: Systemic Exclusion of First Nations from Quality and Consistent Care. Int J Environ Res Public Health. 2020 Nov 11;17(22):8343. doi: 10.3390/ijerph17228343. PMID: 33187304; PMCID: PMC7697016. (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7697016/)
Canadian Nurses Association. Racism in Healthcare.
https://www.cna-aiic.ca/en/policy-advocacy/advocacy-priorities/racism-in-health-care
Elastic, What is a Large Language model?
https://www.elastic.co/what-is/large-language-models
Hugging Face, How do Transformers work?
https://huggingface.co/learn/nlp-course/en/chapter1/4
Google Cloud tech. Transformers, explained: Understand the model behind GPT, BERT, and T5 (Aug 18, 2021). Last accessed March 15, 2024.
https://www.youtube.com/watch?v=SZorAJ4I-sA
Kniberg, Henri. Generative AI in a nutshell - how to survive and thrive in the age of AI (Jan 20, 2024). Last accessed March 15, 2024.
https://www.youtube.com/watch?v=2IK3DFHRFfw
Karpathy, Andrej. [1hr Talk] Intro to Large Language Models. (Nov 22, 2023). Last accessed March 15, 2024.
https://www.youtube.com/watch?v=zjkBMFhNj_g
Datacamp. Hyperparameter Optimization in Machine Learning Models. (Aug, 2018). Last accessed March 15, 2024.
https://www.datacamp.com/tutorial/parameter-optimization-machine-learning-models
Mirza Fahd. What is Parameter in Model - Simple Explanation with Example. (Aug 2, 2023). Last accessed March 15, 2024.
https://www.youtube.com/watch?v=2wAOKIMJ9mI
Talebi Shaw. A Practical Introduction to Large Language Models (LLMs). (Jul 22, 2023). Last accessed March 15, 2024.
https://www.youtube.com/watch?v=tFHeUSJAYbE
Pfizer. How a Novel ‘Incubation Sandbox’ Helped Speed Up Data Analysis in Pfizer’s COVID-19 Vaccine Trial. Last accessed on March 15, 2024.
Shimron E, Perlman O. AI in MRI: Computational Frameworks for a Faster, Optimized, and Automated Imaging Workflow. Bioengineering (Basel). 2023 Apr 20;10(4):492. doi: 10.3390/bioengineering10040492. PMID: 37106679; PMCID: PMC10135995.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10135995/
Forbes. How AI Can Help Make A Better Covid Vaccine. (Aug 27, 2024). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=vTCSYIEnSJ8
Click on Detroit, Local 4, WDIV. How Artificial Intelligence is improving MRI scans. (Dec 26, 2023). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=KH-7A1RTn7Y
Wadhwani Institute of Technology and Policy. Case Study | Speeding Up Vaccine Development using AI, Deep Learning and Analytics | Healthcare | (Nov 8, 2023). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=ZvNvgkD_myI
Wion. Scientists can now use AI to convert brain scans into words. (May 2, 2023). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=yME0KuI1s-Q
RadNet. Artificial Intelligence & Faster, More Reliable MRI Scans. (Sep 7, 2021). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=VA6buYNyr0s
Wikipedia contributors. (March 14, 2024). Large language model. In Wikipedia, The Free Encyclopedia. Retrieved 02:40, March 16, 2024, from https://en.wikipedia.org/w/index.php?title=Large_language_model&oldid=1213683917
https://www.appypie.com/blog/architecture-and-components-of-llms
https://lakefs.io/blog/llmops/
Elastic. What are vector embeddings? Last accessed on March 15, 2024.
https://www.elastic.co/what-is/vector-embedding
Serrano.Academy. The Attention Mechanism in Large Language Models. (Jul 25, 2023). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=OxCpWwDCDFQ
Serrano.Academy. What are Transformer Models and how do they work? (Nov 2, 2023). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=qaWMOYf4ri8
IBM Technology. Neural Networks Explained in 5 minutes. (May 24, 2023). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=jmmW0F0biz0
Serrano, Luis. What Are Transformer Models and How Do They Work? (Apr 12, 2023). Last accessed on March 15, 2024.
https://txt.cohere.com/what-are-transformer-models/
Serrano. Academy. A friendly introduction to Recurrent Neural Networks. (Aug 18, 2017) Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=UNmqTiOnRfg
Misra Turp. Fool-proof RNN explanation: What are RNNs, how do they work? (March 20, 2022). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=y9PLF2GsD-c
https://www.telusinternational.com/insights/ai-data/article/difference-between-cnn-and-rnn
StatQuest with Josh Starmer. Recurrent Neural Networks (RNNs), Clearly Explained!!! (Jul 10, 2022). Last accessed on March 15 2024.
https://www.youtube.com/watch?v=AsNTP8Kwu80
The AI Hacker. Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition. (Aug 25, 2018). Last accessed on March 15 2024.
https://www.youtube.com/watch?v=LHXXI4-IEns
Google for Developers. What are Large Language Models (LLMs)? (May 5, 2023). Last accessed on March 15 2024.
https://www.youtube.com/watch?v=iR2O2GPbB0E
Google Cloud Tech. Introduction to large language models. (May 8, 2023). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=zizonToFXDs
SuperAnnotate. Fine-tuning large language models (LLMs) in 2024. (February 5, 2024). Last accessed on March 15, 2024.
https://www.superannotate.com/blog/llm-fine-tuning
Snorkel, Bach, Stephen. Large language model training: how three training phases shape LLMs. (February 27, 2024). Last accessed on March 15 2024.
Assembly AI. How do Multimodal AI models work? Simple explanation. (Dec 5, 2023). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=WkoytlA3MoQ
Neural Breakdown with AVB. Multimodal AI from First Principles - Neural Nets that can see, hear, AND write. (May 27, 2023). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=-llkMpNH160
https://www.kdnuggets.com/2023/03/multimodal-models-explained.html
Lawton, George. multimodal AI. Last accessed on March 15, 2024.
https://www.techtarget.com/searchenterpriseai/definition/multimodal-AI
https://www.splunk.com/en_us/blog/learn/multimodal-ai.html
Meta. Multimodal generative AI systems. (Dec 12, 2023). Last accessed on March 15, 2024.
https://ai.meta.com/tools/system-cards/multimodal-generative-ai-systems/
AIMESOFT. Introduction to multimodal AI. Last accessed on March 15, 2024.
https://www.aimesoft.com/multimodalai.html
Wikipedia contributors. (2024, March 10). Multimodal learning. In Wikipedia, The Free Encyclopedia. Retrieved 03:27, March 16, 2024, from https://en.wikipedia.org/w/index.php?title=Multimodal_learning&oldid=1213025655
By the Pecan Team. What is Multimodal AI? Combining Tools for Business Impact. (Dec 28, 2023). Last accessed on March 15 2024.
https://www.pecan.ai/blog/what-is-multimodal-ai-business/
Rouse, Margaret. Multimodal AI (Multimodal Artificial Intelligence). (Jul 4, 2023). Last accessed on March 15 2024.
https://www.techopedia.com/definition/multimodal-ai-multimodal-artificial-intelligence
IBM Technology. What is NLP (Natural Language Processing)? (Aug 11, 2021). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=fLvJ8VdHLA0
Simplilearn. Natural Language Processing In 5 Minutes | What Is NLP And How Does It Work? | Simplilearn. (March 17, 2021). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=CMrHM8a3hqw
Google Cloud Tech. How Computer Vision Works. (Apr 19, 2018). Last accessed on March 15 2024.
https://www.youtube.com/watch?v=OcycT1Jwsns
Global News. Systemic racism rampant in Alberta’s health-care system, study shows. (Jan 19, 2022). Last accessed on March 15 2024.
https://www.youtube.com/watch?v=CZcq_j8f7fw
CBC. Racism in health-care system 'pervasive': study. Last accessed on March 15, 2024.
https://www.cbc.ca/player/play/2651743690
Reid Rogene, CBC. When you are Black, elderly and a woman, health care discrimination is a triple whammy. (Nov 3, 2022). Last accessed on March 15 2024.
https://www.cbc.ca/news/opinion/opinion-health-care-discrimination-rogene-reid-1.6607676
Sterritt Angela, Lindsay Bethany CBC. Nurse says she won't rest until Indigenous patients 'actually feel safe' seeking health care. (Sep 28, 2023). Last accessed on March 15, 2024.
The Canadian Press. Canadian medical journal acknowledges its role in perpetuating anti-Black racism in health care. (Oct 24, 2022). Last accessed on March 15 2024.
https://www.cbc.ca/news/health/cmaj-anti-racism-1.6627312
Bains Camille, The Canadian Press,CBC. Anti-racism policies in health care should be led by Indigenous staff: report. (Apr 04, 2023). Last accessed on March 15, 2024.
https://www.cbc.ca/news/health/antiracism-health-care-canada-indigenous-1.6801412
Basu Brishti. Barriers like racism, distrust may be main cause of health-care disparities for Indigenous women, study says. (Aug 28, 2023). Last accessed on March 15, 2024.
https://www.cbc.ca/news/health/indigenous-women-health-care-inequities-1.6949274
CBC. An Indigenous doctor believes that institutionalized racism exists in the healthcare system. (March 3, 2024). Last accessed on March 15, 2024.
https://www.cbc.ca/player/play/2314479683549
CBC News: The National. Tackling systemic racism in Canada’s health-care system. (Oct 16, 2020). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=MVdKURnP6_Y
Breakfast Television. What medical racism is — and how it exists in Canada right now. (Feb 8, 2023). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=3U6lyjK5EIw
CityNews. Racial bias in Canadian healthcare. (Dec 4, 2019). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=40V43Iz0QCM
Global News. Joyce Echaquan's family calls for justice after her death in Quebec hospital. (Sep 30, 2020). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=9crl-lcZOEk
Global News. Alleged treatment of dying Indigenous woman in Quebec hospital sparks outrage. (Sep 29 2020). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=jXpBkQnF8yI
TEDx Talks. Artificial intelligence in healthcare: opportunities and challenges | Navid Toosi Saidy | TEDxQUT. (Nov 18, 2021). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=uvqDTbusdUU
TEDx Talks. The future of AI in medicine | Conor Judge | TEDxGalway. (Nov 28, 2023). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=N3wJwz97b8A
TEDx Talks. AI in Healthcare: The Next Frontier | Leonardo Castorina | TEDxUniversityofEdinburgh. (Nov 27, 2023). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=3PbEgLw6lJ0
TEDx Talks. Will AI mean we no longer need doctors? | Enrico Coiera | TEDxMacquarieUniversity. (Oct 19, 2019). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=ZTp8r--YR84
TEDx Talks. Who will you be in Healthcare 4.0? | Tiffany Ma | TEDxUniversityofEdinburgh. (Nov 27, 2023). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=iFR24SDqlok
https://www.youtube.com/watch?v=k_-Z_TkHMqA
TEDx Talks. Democratizing Healthcare With AI | Lily Peng | TEDxGateway. (Jun 25, 2020). Last accessed on March 15, 2024.
https://www.youtube.com/watch?v=MNp26DgKxO
Google Cloud. What are AI hallucinations? Last accessed on March 15, 2024.
https://cloud.google.com/discover/what-are-ai-hallucinations
Wikipedia contributors. (2024, March 10). History of artificial intelligence. In Wikipedia, The Free Encyclopedia. Retrieved 04:03, March 16, 2024, from https://en.wikipedia.org/w/index.php?title=History_of_artificial_intelligence&oldid=1212891261
Acknowledgement
I would like to thank my family for supporting me and helping me edit and practice my project. I would also like to thank my science club teacher, Mme Lam, who helped and supported us every step of the way! I would also like to thank my friends, who made this project ten times more fun! Thanks to my uncle, who informed me and taught me more about AI. I would also like to thank my family friend Guy Davis, who helped me kick-start my project by informing me about LLMs. Thank you to everyone who helped and supported me along the way!