RaiderChip Hardware NPU adds Falcon-3 LLM to its supported AI models
Dec. 19th, 2024: The model, launched by Abu Dhabi's Technology Innovation Institute (TII), runs seamlessly on RaiderChip's AI accelerator and its FPGA-based demo.
Dec. 5th, 2024: The combination of this open-source model with RaiderChip's NPU enables on-premises Generative AI solutions trained in the 24 languages of the EU, in compliance with strict European legal standards for data storage and processing.
Oct. 1st, 2024: The company incorporates the latest model, presented by Meta less than a week earlier, into its catalog of LLMs already accelerated on a wide range of FPGAs.
Sept. 20th, 2024: The new version brings a 276% speed increase for the top LLMs in low-cost systems, while maintaining their intelligence.
July 4th, 2024: Participants had the opportunity to chat in real-time with the Microsoft Phi-2 LLM, fully accelerated inside RaiderChip’s Gen AI Hardware IP core on an offline AMD Versal FPGA.
June 4th, 2024: RaiderChip launches its Generative AI hardware accelerator for LLMs on low-cost FPGAs.