Forging Light with AI
Wilbur AI prototype demo
A few months ago we tasked ourselves with finding out what it would take to build a chatbot that represents our vision of healthcare instead of just parroting the narratives of the CDC, NIH, WHO, etc.
Here is a link to one of our demos. I will also go over how we built it and some of the challenges we faced. This space is evolving rapidly, so some of these challenges may now have better tooling and options than were available when we ran our spike.
https://customer-lppli6s9nx5dxtbc.cloudflarestream.com/68781b728d0d5feeb88de4c25e28a0c9/watch
We used Stability AI’s Stable Beluga LLM as the foundation. We tried several other open-source/uncensored models, but Beluga gave the best results by a wide margin.
This makes sense given what the founder of Stability AI said in a recent interview: he is focused on AI for personalized healthcare and treatment discovery.
https://youtu.be/Se91Pn3xxSs?t=3354
HuggingFace.co is a great place to quickly pull models into the web UI, and TheBloke does a great job publishing quantized builds of LLMs, with several available for Stability’s Beluga:
https://huggingface.co/models?search=thebloke/stablebeluga
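Stable Beluga models expect a specific chat-prompt layout (a `### System:` / `### User:` / `### Assistant:` structure, per the Stability AI model cards — verify against the card for the exact variant you download). A minimal sketch of assembling that prompt, with hypothetical system and user messages:

```python
# Sketch of the prompt template the Stable Beluga model cards describe.
# The system/user messages below are illustrative, not from our dataset.

def build_prompt(system_msg: str, user_msg: str) -> str:
    """Assemble a single-turn Stable Beluga prompt."""
    return (
        f"### System:\n{system_msg}\n\n"
        f"### User:\n{user_msg}\n\n"
        f"### Assistant:\n"
    )

prompt = build_prompt(
    "You are a helpful medical research assistant.",   # hypothetical system message
    "Summarize the evidence on vitamin D and immune function.",
)
print(prompt)
```

Text Generation Web UI applies templates like this for you once the model’s instruction format is configured, but knowing the raw layout helps when calling the model directly.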
We used Text Generation Web UI as an easy way to manage and configure the chat interface. It is also what we used to generate LoRAs from our training data.
https://github.com/oobabooga/text-generation-webui
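For the LoRA training mentioned above, the web UI’s training tab can consume instruction-style JSON datasets (Alpaca-format is a common choice there; check the project’s training docs for the formats your version supports). A hypothetical record, with illustrative field values rather than our real data, looks like:

```
[
  {
    "instruction": "Summarize the FLCCC guidance on vitamin D dosing.",
    "input": "",
    "output": "Illustrative placeholder answer drawn from our source material."
  }
]
```

Getting our transcripts and documents into small, clean records like this was most of the work.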
Matthew Berman has a lot of good content and tutorials. Here is one on running any LLM on RunPod with the web UI:
https://youtu.be/_59AsSyMERQ?si=11jQgsbhKaUrzoex
Our training data was not all-encompassing; we have a staggering amount of information from many sources, so we gathered a small, representative sample for this test.
We used Whisper AI (https://github.com/openai/whisper) to transcribe our conference lectures and other large video presentations.
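A minimal sketch of that transcription step using Whisper’s Python API (requires `pip install openai-whisper` and ffmpeg; the filename and model size below are placeholders, not our actual pipeline):

```python
# Sketch: transcribing a lecture recording with OpenAI's Whisper.
# "lecture.mp4" is a placeholder path; "base" is the smallest useful
# model — larger ones ("medium", "large") are more accurate but slower.

def segments_to_text(segments):
    """Join Whisper's per-segment output into one clean transcript string."""
    return " ".join(seg["text"].strip() for seg in segments)

if __name__ == "__main__":
    import whisper  # third-party package: openai-whisper

    model = whisper.load_model("base")
    result = model.transcribe("lecture.mp4")  # placeholder filename
    print(segments_to_text(result["segments"]))
```

Whisper returns timestamped segments, so joining and cleaning them into one transcript is the only post-processing a simple pipeline needs.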
We wrote some custom scripts to try to extract web and PDF content into a form the LLM can be trained on.
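To give a flavor of what those scripts do, here is a small sketch that strips a saved HTML page down to plain text and emits one JSON-lines training record. It uses only the standard-library HTML parser; real pages (and PDFs) need heavier tooling, and the record fields are illustrative, not our actual schema:

```python
# Sketch: turn raw HTML into a plain-text JSONL record for training.
import json
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script>/<style> blocks."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())


def html_to_record(html: str, source: str) -> str:
    """Return one JSONL line: {"source": ..., "text": ...}."""
    parser = TextExtractor()
    parser.feed(html)
    return json.dumps({"source": source, "text": " ".join(parser.parts)})


record = html_to_record(
    "<html><body><h1>Protocol</h1><p>Take with food.</p>"
    "<script>tracker()</script></body></html>",
    "example.html",  # placeholder source name
)
print(record)
```

The hard part in practice is not tag-stripping but deciding what to keep: navigation, footers, and citations all pollute training text if extracted blindly.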
And finally, we used RunPod (https://www.runpod.io/) to host our demo at low cost.
The demo looks promising, but still requires a lot of fine-tuning and RLHF to get it to answer from the perspective of FLCCC.
Training can be difficult, and the tooling for gathering content in a form an LLM can effectively learn from needs to improve.
We are also very uncertain about the cost of scaling such a solution to the point where it can be offered to our user base.
There are some really exciting projects we are looking at for the next round of prototypes that help solve some of these problems.
MemGPT (https://research.memgpt.ai/) is an incredible leap forward that lets an LLM work with more information and retain long-term memory, making it less likely to “forget the middle” of large context prompts.
Pinokio (https://pinokio.computer/) is a platform that would allow loading and unloading smaller, more specialized AIs on demand instead of trying to create one large “AGI”.
Stability AI is also focused on supporting specialized AIs instead of general-purpose, “knows everything” LLMs.
Our goal is to empower personal healthcare by making it far easier to find high-quality information from FLCCC and our partners.
If you are interested in helping us, or if you have ideas, new research, etc., please let us know in the replies. We want to use the new community platform to make our AI dreams a reality.