Welcome - This classroom organization holds examples and links for this session.
Begin by adding a bookmark.
Chat and Clinical
Open Datasets for Health Care
- Open-source or Creative Commons Zero (CC0) datasets, plus links to PDFs for public clinical use:
Examples and Exercises - Create These Spaces in Your Account and Test / Modify
Easy Examples
- FastSpeech - https://huggingface.co/spaces/AIZero2HeroBootcamp/FastSpeech2LinerGradioApp
- Memory - https://huggingface.co/spaces/AIZero2HeroBootcamp/Memory
- StaticHTML5PlayCanvas - https://huggingface.co/spaces/AIZero2HeroBootcamp/StaticHTML5Playcanvas
- 3DHuman - https://huggingface.co/spaces/AIZero2HeroBootcamp/3DHuman
- TranscriptAILearnerFromYoutube - https://huggingface.co/spaces/AIZero2HeroBootcamp/TranscriptAILearnerFromYoutube
- AnimatedGifGallery - https://huggingface.co/spaces/AIZero2HeroBootcamp/AnimatedGifGallery
- VideoToAnimatedGif - https://huggingface.co/spaces/AIZero2HeroBootcamp/VideoToAnimatedGif
Hard Examples:
- ChatGPTandLangChain - https://huggingface.co/spaces/AIZero2HeroBootcamp/ChatGPTandLangchain
- API keys: https://platform.openai.com/account/api-keys
- MultiPDFQAChatGPTLangchain - https://huggingface.co/spaces/AIZero2HeroBootcamp/MultiPDF-QA-ChatGPT-Langchain
Two easy ways to turbo-boost your AI learning journey - let's go 100X!
AI Pair Programming with GPT
Open two browsers to:
- ChatGPT URL or URL2 and
- Huggingface URL in separate browser windows.
- Use prompts to generate a Streamlit program on Huggingface, or locally to test it.
- For advanced work, add Python 3.10 and VSCode locally, and debug as Gradio or Streamlit apps.
- Use these two superpower processes to reduce the time it takes you to make a new AI program!
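As a concrete target for the prompting workflow above, here is a minimal Streamlit sketch. The greeting logic is a hypothetical example, and the script falls back to a plain function call if Streamlit is not installed:

```python
def greet(name: str) -> str:
    """Pure helper so the logic can be tested without a UI."""
    return f"Hello, {name}! Welcome to AI pair programming."

try:
    import streamlit as st  # UI layer; optional if you only test the logic
    st.title("AI Pair Programming Demo")
    name = st.text_input("Your name", value="world")
    st.write(greet(name))
except ImportError:
    # Streamlit not installed locally; the helper above still works standalone.
    print(greet("world"))
```

Run it with `streamlit run app.py` once Streamlit is installed; keeping the logic in a pure function makes it easy to debug in VSCode before wiring up the UI.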
YouTube University Method:
- Plan two hours each weekday to exercise your body and brain.
- Make a playlist of videos you want to learn from on YouTube. Save the links to edit later.
- Try watching the videos at a faster speed while exercising, and sample the first five minutes of each video.
- Reorder the playlist so the most useful videos are at the front, and take breaks to exercise.
- Practice note-taking in markdown to instantly save what you want to remember. Share your notes with others!
- AI Pair Programming Using Long-Answer Language Models with Human Feedback
2023 AI/ML Learning Playlists for ChatGPT, LLMs, Recent Events in AI:
- AI News: https://www.youtube.com/playlist?list=PLHgX2IExbFotMOKWOErYeyHSiikf6RTeX
- ChatGPT Code Interpreter: https://www.youtube.com/playlist?list=PLHgX2IExbFou1pOQMayB7PArCalMWLfU-
- Ilya Sutskever and Sam Altman: https://www.youtube.com/playlist?list=PLHgX2IExbFovr66KW6Mqa456qyY-Vmvw-
- Andrew Huberman on Neuroscience and Health: https://www.youtube.com/playlist?list=PLHgX2IExbFotRU0jl_a0e0mdlYU-NWy1r
- Andrej Karpathy: https://www.youtube.com/playlist?list=PLHgX2IExbFovbOFCgLNw1hRutQQKrfYNP
- Medical Futurist on GPT: https://www.youtube.com/playlist?list=PLHgX2IExbFosVaCMZCZ36bYqKBYqFKHB2
- ML APIs: https://www.youtube.com/playlist?list=PLHgX2IExbFovPX9z4m61rQImM7cDDY79L
- FastAPI and Streamlit: https://www.youtube.com/playlist?list=PLHgX2IExbFosyX2jzJJimPAI9C0FHflwB
- AI UI UX: https://www.youtube.com/playlist?list=PLHgX2IExbFosCUPzEp4bQaygzrzXPz81w
- ChatGPT Streamlit 2023: https://www.youtube.com/playlist?list=PLHgX2IExbFotDzxBRWwUBTb0_XFEr4Dlg
2023 AI/ML Advanced Learning Playlists:
- 2023 QA Models and Long Form Question Answering NLP
- FHIR Bioinformatics Development Using AI/ML and Python, Streamlit, and Gradio - 2022
- 2023 ChatGPT for Coding Assistant Streamlit, Gradio and Python Apps
- 2023 BigScience Bloom - Large Language Model for AI Systems and NLP
- 2023 Streamlit Pro Tips for AI UI UX for Data Science, Engineering, and Mathematics
- 2023 Fun, New and Interesting AI, Videos, and AI/ML Techniques
- 2023 Best Minds in AGI AI Gamification and Large Language Models
- 2023 State of the Art for Vision Image Classification, Text Classification and Regression, Extractive Question Answering and Tabular Classification
- 2023 AutoML DataRobot and AI Platforms for Building Models, Features, Test, and Transparency
Azure Development Architectures in 2023:
- ChatGPT: https://azure.github.io/awesome-azd/?tags=chatgpt
- Azure OpenAI Services: https://azure.github.io/awesome-azd/?tags=openai
- Python: https://azure.github.io/awesome-azd/?tags=python
- AI LLM Architecture - Guidance by MS: https://github.com/microsoft/guidance
Dockerfile and Azure ACR->ACA Easy Robust Deploys from VSCode:
- Set up VSCode with the Azure and Remote extensions, and install the Azure CLI locally.
- Get access to Azure subscriptions. From there in VSCode, expand to Container Apps.
- In Container Apps, create a new app and pick a Dockerfile to deploy to an ACR, then spin up an ACA, using Azure to build.
Dockerfile for Streamlit and Dockerfile for FastAPI:
Show two examples.
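Hedged sketches of the two Dockerfiles: the base image tag, ports, and entry-point filenames (`app.py`, `main.py`) are assumptions, not prescriptions — adjust them to your project.

```dockerfile
# Dockerfile for a Streamlit app (assumes your entry point is app.py)
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8501
CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
```

```dockerfile
# Dockerfile for a FastAPI app (assumes your ASGI app object lives in main.py)
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Both bind to 0.0.0.0 so the app is reachable from outside the container, which ACA requires.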
Example Starter Prompts for AIPP:
Write a Streamlit program that demonstrates data synthesis.
Synthesize data from multiple sources to create new datasets.
Use two datasets and demonstrate pandas DataFrame query, merge, and join
with two datasets as Python lists of dictionaries:
a list of hospitals with over 1,000 beds by city and state, and
state population size and square miles.
Perform a calculated function on the merged dataset.
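The merge/join part of the prompt above can be sketched directly in pandas. The hospital and state figures below are hypothetical sample data, and "beds per million residents" is one example of a calculated function on the merged dataset:

```python
import pandas as pd

# Hypothetical sample data matching the prompt: hospitals with over
# 1,000 beds by city and state, and state population / area figures.
hospitals = [
    {"hospital": "General Hospital A", "city": "Houston", "state": "TX", "beds": 1500},
    {"hospital": "General Hospital B", "city": "Atlanta", "state": "GA", "beds": 1200},
]
states = [
    {"state": "TX", "population": 29_500_000, "sq_miles": 268_596},
    {"state": "GA", "population": 10_700_000, "sq_miles": 59_425},
]

df_hospitals = pd.DataFrame(hospitals)
df_states = pd.DataFrame(states)

# Inner join on the shared "state" column.
merged = df_hospitals.merge(df_states, on="state", how="inner")

# A calculated function on the merged dataset: beds per million residents.
merged["beds_per_million"] = merged["beds"] / (merged["population"] / 1_000_000)

print(merged[["hospital", "state", "beds_per_million"]])
```

Wrapping the `print` in `st.dataframe(merged)` turns this into the Streamlit app the prompt asks for.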
Comparison of Large Language Models
| Model Name | Model Size (in Parameters) |
| --- | --- |
| BigScience-tr11-176B | 176 billion |
| GPT-3 | 175 billion |
| OpenAI's DALL-E 2.0 | 500 million |
| NVIDIA's Megatron | 8.3 billion |
| Transformer-XL | 250 million |
| XLNet | 210 million |
ChatGPT Datasets
- WebText
- Common Crawl
- BooksCorpus
- English Wikipedia
- Toronto Books Corpus
- OpenWebText
ChatGPT Datasets - Details
- WebText: A dataset of web pages scraped from outbound links shared on Reddit with at least three karma. This dataset was used to pretrain GPT-2.
- Common Crawl: A dataset of web pages from a variety of domains, which is updated regularly. This dataset was used to pretrain GPT-3.
- BooksCorpus: A dataset of over 11,000 books from a variety of genres.
- English Wikipedia: A dump of the English-language Wikipedia as of 2018, with articles from 2001-2017.
- Toronto Books Corpus: A dataset of over 7,000 books from a variety of genres, collected by the University of Toronto.
- OpenWebText: An open-source recreation of WebText, with pages filtered to remove content that was likely to be low-quality or spammy.
BigScience Model
Datasets:
- Universal Dependencies: A collection of annotated corpora for natural language processing in a range of languages, with a focus on dependency parsing.
- WMT 2014: The 2014 Workshop on Statistical Machine Translation, featuring shared tasks on translating between English and various other languages.
- The Pile: An English language corpus of diverse text, sourced from various places on the internet.
- HumanEval: A dataset of hand-written programming problems, used to evaluate code generation from natural-language docstrings.
- FLORES-101: A dataset of parallel sentences in 101 languages, designed for multilingual machine translation.
- CrowS-Pairs: A dataset of sentence pairs, designed for measuring social biases in language models.
- WikiLingua: A cross-lingual dataset of article-summary pairs in 18 languages, sourced from WikiHow.
- MTEB: The Massive Text Embedding Benchmark, covering a broad range of text embedding tasks in many languages.
- xP3: A crosslingual mixture of prompted tasks and datasets, used to fine-tune BLOOM for instruction following.
- DiaBLa: A dataset of English-French bilingual dialogues, designed for evaluating machine translation.
Deep RL ML Strategy
The AI strategies are:
- Language Model Preparation Using Human-Augmented Data with Supervised Fine-Tuning
- Reward Model Training with a Prompts Dataset: Multiple Models Generate Data to Rank
- Fine-Tuning with Reinforcement Reward and Distance Distribution Regret Score
- Proximal Policy Optimization Fine-Tuning
- Variations: Preference Model Pretraining
- Use Ranking Datasets for Sentiment: Thumbs Up/Down, Distribution
- Online Version Getting Feedback
- OpenAI: InstructGPT - Humans Generate LM Training Text
- DeepMind: Advantage Actor-Critic (Sparrow, GopherCite)
- Reward Model with Human Preference Feedback
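The reward-model training step above is usually driven by a pairwise preference loss, -log(sigmoid(r_chosen - r_rejected)): the loss is small when the reward model scores the human-preferred response higher. The scores below are hypothetical stand-ins for a reward model's outputs:

```python
import math

def pairwise_preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise ranking loss used in RLHF reward-model training:
    -log(sigmoid(r_chosen - r_rejected))."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Hypothetical reward scores for a (preferred, rejected) response pair.
loss_good = pairwise_preference_loss(2.0, -1.0)  # preferred scored higher -> small loss
loss_bad = pairwise_preference_loss(-1.0, 2.0)   # preferred scored lower -> large loss
print(loss_good, loss_bad)
```

Minimizing this loss over many ranked pairs teaches the reward model to reproduce the human ranking, which the PPO fine-tuning step then optimizes against.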
For more information on specific techniques and implementations, check out the following resources:
- OpenAI's paper on GPT-3, which details their language model training approach
- DeepMind's paper on A3C, which describes the Advantage Actor-Critic algorithm
- OpenAI's paper on reward learning, which explains their approach to training reward models
- OpenAI's blog post on GPT-3's fine-tuning process