Episode Link: https://share.snipd.com/episode/b1e87f81-98bf-4111-9206-990ae9bebb8f
Episode publish date: August 12, 2025 5:00 PM (UTC)
Last edit date: August 27, 2025 1:07 AM
Last snip date: August 27, 2025 2:02 AM (GMT+1)
Last sync date: August 27, 2025 2:03 AM (GMT+1)
Show: MLOps.community
Show notes link: https://podcasters.spotify.com/pod/show/mlops/episodes/LinkedIn-Recommender-System-Predictive-ML-vs-LLMs-e36n94c
Snips: 11
Warning: ⚠️ Any content within the episode information and snip blocks might be updated or overwritten by Snipd in a future sync. Add your edits or additional notes outside these blocks to keep them safe.
Your snips
[00:07] LLMs Reduce Feature Engineering Burden
[07:09] Mitigate LLM Latency With Distillation Or Offline Use
[09:16] Eval Criteria Stay The Same
[11:58] Prompts Replace Much Of Feature Design
[15:30] Compare LLMs Directly Before Replacing Models
[18:29] LLMs Simplify Multi-Model Pipelines
[24:21] Use LLMs For Planning; ML For Final Ranking
[27:09] LLMs Ease Cold-Start Problems
[31:49] Smaller Models Can Beat LLMs For Narrow Tasks
[34:58] Be Cautious With Agentic AI In Production
