Download Meta Llama 3 ➡️ https://go.fb.me/0mr91h
Navyata Bawa from Meta discusses several ways to host and run Meta Llama models — including AWS, Kaggle, Vertex AI and others — with examples, demos and resources to help you get started.
# Timestamps
00:00 Intro
00:37 Running Llama in the cloud
01:03 Running Llama on AWS
01:49 Running Llama on Kaggle
02:22 Running Llama on Google Cloud Platform
03:13 Running Llama on API providers
03:43 Using TorchServe to serve Llama models
04:20 Using vLLM and TGI to deploy Llama
05:00 Llama Recipes repo and demos
06:03 Links and resources
# Additional Resources
• Run Llama 3 everywhere: https://llama.meta.com/docs/llama-everywhere
• Getting Started Guide: https://go.fb.me/90gu7x
• Running Llama on Hugging Face - Notebook: https://go.fb.me/6imdpa
• Running Llama 3 On-Prem Inference Using vLLM and TGI: https://go.fb.me/33dvvr
• Llama recipes repo (fine-tuning, inference, API provider examples and more use cases): https://go.fb.me/i05hes
• Getting to know Llama - Notebook: https://go.fb.me/mhe17z
• Prompt Engineering with Llama 2 & Llama 3 on DeepLearning.AI: https://go.fb.me/uu10tv
• Model Card: https://go.fb.me/y8cvs5
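As a companion to the on-prem deployment options covered at 04:20, both vLLM and TGI expose one-command servers. A minimal sketch — the model ID, ports, and token variable here are illustrative assumptions, not values from the video:

```shell
# Serve Llama 3 with vLLM's OpenAI-compatible server (requires a GPU and model access)
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Meta-Llama-3-8B-Instruct \
    --port 8000

# Or serve with Hugging Face Text Generation Inference (TGI) via Docker;
# the token env var assumes you have accepted the model license on Hugging Face
docker run --gpus all -p 8080:80 \
    -e HUGGING_FACE_HUB_TOKEN=$HUGGING_FACE_HUB_TOKEN \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id meta-llama/Meta-Llama-3-8B-Instruct
```

Either server can then be queried over HTTP; see the "Running Llama 3 On-Prem Inference Using vLLM and TGI" link above for the full walkthrough.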
- - -
Subscribe: https://www.youtube.com/aiatmeta?sub_confirmation=1
Learn more about our work: https://ai.meta.com
# Follow us on social media
Follow us on Twitter: https://twitter.com/aiatmeta/
Follow us on LinkedIn: https://www.linkedin.com/showcase/aiatmeta
Follow us on Threads: https://threads.net/aiatmeta
Follow us on Facebook: https://www.facebook.com/AIatMeta/
Follow Navyata on Twitter: https://twitter.com/navyatabawa

