
    imitation-learning

    Explore "imitation-learning" with insightful episodes like "Orca 2: Enhancing Reasoning in Smaller Language Models - Example from Benchmarks and Output" and "Orca 2: Enhancing Reasoning in Smaller Language Models - Technical Details" from the podcast "Programming Tech Brief By HackerNoon" and more!

    Episodes (2)

    Orca 2: Enhancing Reasoning in Smaller Language Models - Example from Benchmarks and Output

    This story was originally published on HackerNoon at: https://hackernoon.com/orca-2-enhancing-reasoning-in-smaller-language-models-example-from-benchmarks-and-output.
    Orca 2 enhances small language models' reasoning by teaching diverse strategies for tasks, outperforming models up to 10x larger in complex benchmarks.
    Check more stories related to programming at: https://hackernoon.com/c/programming. You can also check exclusive content about #language-models, #orca-2, #reasoning-techniques, #machine-learning, #small-models, #imitation-learning, #ai-benchmarks, #model-training, and more.

    This story was written by: @textmodels. Learn more about this writer by checking @textmodels's about page, and for more stories, please visit hackernoon.com.

    Teaching Orca 2 to be a Cautious Reasoner is based on the work of Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andres Codas, Guoqing Zheng, Corby Rosset, Hamed Khanpour, and Ahmed Awadallah.

    Orca 2: Enhancing Reasoning in Smaller Language Models - Technical Details

    This story was originally published on HackerNoon at: https://hackernoon.com/orca-2-enhancing-reasoning-in-smaller-language-models-technical-details.
    Orca 2 enhances small language models' reasoning by teaching diverse strategies for tasks, outperforming models up to 10x larger in complex benchmarks.
    Check more stories related to programming at: https://hackernoon.com/c/programming. You can also check exclusive content about #language-models, #orca-2, #reasoning-techniques, #machine-learning, #small-models, #imitation-learning, #ai-benchmarks, #model-training, and more.

    This story was written by: @textmodels. Learn more about this writer by checking @textmodels's about page, and for more stories, please visit hackernoon.com.

    The Orca 2 dataset has four main sources. FLAN: our main source of prompts for synthetic data generation is the FLAN-v2 Collection [33], which consists of five sub-collections. Following Orca 1 [42], we consider tasks from only the CoT, NiV2, T0, Flan 2021, and Dialogue sub-collections. Some of these tasks come with an associated answer. For the Cautious Reasoning dataset, we selected ~602K zero-shot user queries from the training split of 1448 high-quality tasks out of 1913.
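
    The selection step described above boils down to filtering a task pool to a high-quality subset and then sampling zero-shot queries from it. Below is a minimal Python sketch of that idea; the task records, the high_quality flag, and the tiny counts are illustrative assumptions, not the actual Orca 2 data pipeline.

```python
import random

# Toy stand-in for the 1913 FLAN-v2 tasks; "high_quality" marks a
# subset analogous to the 1448 high-quality tasks mentioned above.
# The flag criterion here is arbitrary and purely for illustration.
tasks = [
    {
        "name": f"task_{i}",
        "high_quality": i % 4 != 0,
        "zero_shot_queries": [f"task_{i}_query_{j}" for j in range(5)],
    }
    for i in range(1913)
]

# Keep only the high-quality tasks.
high_quality_tasks = [t for t in tasks if t["high_quality"]]

# Pool every zero-shot user query from those tasks, then sample a
# fixed-size subset (hundreds of thousands in Orca 2; tiny here).
pool = [q for t in high_quality_tasks for q in t["zero_shot_queries"]]
random.seed(0)
cautious_reasoning_queries = random.sample(pool, k=min(602, len(pool)))

print(f"{len(high_quality_tasks)} high-quality tasks")
print(f"{len(cautious_reasoning_queries)} sampled zero-shot queries")
```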