In just a few short weeks, the vibrant San Jose Convention Center will host Intel® Innovation 2023, a global hub for tech enthusiasts, developers, and industry leaders. This year's event will focus on transformative topics like Artificial Intelligence's role in modern workspaces, edge to cloud technology, and strategies for building future-proof platforms.
As always, the event promises to provide unmissable insights into the future of technology. Startups will take center stage this year, showcasing success stories and demos. Six of the most promising AI startups from the Intel® Liftoff for Startups program will be featured. In addition to showcasing their results, the event will also be an opportunity to demonstrate Intel® Developer Cloud's potential for GenAI SaaS.
Argilla is revolutionizing the open-source platform arena, showcasing the power of rapid LLM fine-tuning tailored for specific business applications, such as banking service assistants. Using Intel® hardware and APIs, Argilla demonstrates the transformative effects of fine-tuned versus standard models.
Merging Multimodal AI for enhanced enterprise insights, Beewant's Video Understanding demo harnesses textual and visual context to transform video interactions. With the Intel® Developer Cloud at its core, it provides real-time analysis, granting users unprecedented insights for more informed decisions.
Prediction Guard's demo will focus on doctor-patient transcriptions. Their solution overcomes challenges like data privacy, structural inconsistencies, and extraction reliability. Offering HIPAA-compliant, privacy-preserving data outputs, the platform is pioneering a new age of secure and reliable LLM implementations.
SiteMana will demo their personalized email-writing LLM model inspired by ChatGPT. Harnessing the power of Intel® GPUs, Dolly 2.0, and OpenLlama, the model is fine-tuned and rigorously tested on the Intel® Developer Cloud to guarantee superior performance and efficiency.
TurinTech will interactively demonstrate automatic generation of optimized code variations for an open-source project using LLMs on Intel® hardware. The platform refactors source code, presenting optimized versions to developers. It also opens avenues for user-driven fine-tuning of LLMs, enhancing performance even further.
Weaviate stands at the intersection of LLMs and vector databases, offering users NLP-powered searches through vast amounts of unstructured data and data-informed responses via retrieval-augmented generation (RAG). Hosted on the Intel® Developer Cloud, this demo enables users to instantly search over 10 million Wikipedia articles.
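The retrieval step behind such a demo can be illustrated with a minimal, self-contained sketch. This is not Weaviate's actual API: the toy corpus, the bag-of-words `embed` function, and the helper names are all hypothetical stand-ins (a production system would use a learned dense embedding model and a real vector database), but the ranking-by-similarity logic is the core idea of vector search, and the final prompt assembly shows where RAG grounds the LLM in retrieved data.

```python
import math

# Toy corpus standing in for the Wikipedia articles (hypothetical data).
documents = {
    "ada": "Ada Lovelace wrote the first published computer program.",
    "turing": "Alan Turing formalized computation with the Turing machine.",
    "paris": "Paris is the capital city of France.",
}

def embed(text: str) -> dict:
    # Stand-in embedding: bag-of-words term frequencies. A real system
    # would call a learned embedding model to get a dense vector.
    vec = {}
    for token in text.lower().split():
        token = token.strip(".,")
        vec[token] = vec.get(token, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list:
    # Vector search: rank documents by similarity to the query embedding.
    q = embed(query)
    ranked = sorted(documents,
                    key=lambda d: cosine(q, embed(documents[d])),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # RAG step: retrieved passages are prepended to the prompt so the
    # LLM grounds its answer in the data rather than its parameters alone.
    context = " ".join(documents[d] for d in retrieve(query))
    return f"Context: {context}\nQuestion: {query}"

print(retrieve("who wrote the first computer program"))  # → ['ada']
```

At Weaviate's scale the linear scan over documents is replaced by an approximate nearest-neighbor index, which is what makes instant search over millions of articles feasible.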
Connect with Our Startups at Intel® Innovation
The event is a golden opportunity to interact with Intel® experts and the broader developer community, and learn more about the Intel® Liftoff program. Find out how the program can accelerate your AI startup’s growth, and test-drive the latest toolkits, use cases and performance-enhancing solutions.