Presentations
ATS brought together some of the most notable thought leaders in the Arm ecosystem, covering topics from the datacenter and the auto industry to IoT and consumer devices. Get the details here:
Speaker:
Dipti Vachani
Senior Vice President and General Manager, Automotive Line of Business, Arm
By the end of 2025 there will be more than 100 billion AI-capable Arm-powered devices in the world, all built by the brightest leaders in this partner ecosystem. This enormous opportunity presents new challenges that we must come together to resolve; delivering AI everywhere brings a new level of power consumption and compute complexity that requires us to rethink everything.
The solution requires a platform that seamlessly combines power efficiency, optimized software, and integrated solutions that help bring your ideas to market faster. Join us as we highlight the technologies, solutions, and opportunities underpinning the Arm platform, and how, together, we will build the AI compute platform of the future.
Speaker:
Takayuki Yokoyama
President and Representative Director, Arm K.K.
Takayuki Yokoyama hosts a panel discussion with Toshio Yoshida, VP, Executive Director, Fujitsu Limited, and Shojiro Nakao, Assistant Director, R&D Division, Panasonic Automotive Systems Co., Ltd. In this session, Arm leads a conversation with these partners to explore the AI opportunities in their respective domains and how they plan to address them with their AI strategies.
Speaker:
Satoshi Nohara
Director-General, Commerce and Information Policy Bureau, Ministry of Economy, Trade and Industry (METI)
Semiconductors are key technologies indispensable for realizing digitalization and decarbonization. Moreover, from the perspective of economic security, they are strategic products that significantly influence Japan's overall industrial competitiveness. Capturing the growing global demand for semiconductors is crucial for Japan's future economic growth. Today, countries worldwide are striving to secure semiconductor manufacturing capabilities, investing government funds in the semiconductor industry on a scale that was once unthinkable. In other words, the world has entered an era of international competition in industrial policy. Japan is also providing robust support to the semiconductor industry, realizing several large-scale investment projects in collaboration with the world's top players. These include the construction of an advanced logic semiconductor factory in Kumamoto with Taiwan's TSMC, and the Rapidus Project, which aims for mass production of the most cutting-edge semiconductors in collaboration with IBM. Through these initiatives, we have not only filled the missing pieces in Japan's semiconductor technology and strengthened the industry's entire supply chain, but also begun to create significant ripple effects on related industries and a virtuous cycle of investment and wage increases. We intend to continue and accelerate this trend.
Speaker:
The use of ML and generative AI is rapidly shifting from hype to adoption, creating the need for more efficient inferencing at scale. Large language models are getting smaller and more specialized, offering comparable or improved performance at a fraction of the cost and energy. Advances in inferencing techniques such as quantization and sparse coding, and the rise of specialized, lightweight frameworks like llama.cpp, enable LLMs to run on CPUs with good performance for a wide variety of use cases. Arm has been advancing the capabilities of Neoverse cores to address both the compute and memory needs of LLMs, while maintaining its focus on efficiency. Popular ML frameworks like llama.cpp and PyTorch, along with ML compilers, allow easy migration of ML models to Arm-based cloud instances. These hardware and software improvements have increased on-CPU ML performance for use cases like LLMs and recommenders, giving AI application developers flexibility in choosing CPUs or GPUs depending on use case and sustainability targets. Arm is also making it possible for ML system designers to create bespoke ML solutions that combine accelerators with CPU chiplets to offer the best of both worlds.
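The quantization mentioned above can be pictured with a minimal, pure-Python sketch (illustration only; llama.cpp and PyTorch use optimized per-block variants of the same idea): weights are mapped to int8 values plus a scale factor, shrinking them to roughly a quarter of their float32 size.

```python
# Minimal sketch of symmetric int8 weight quantization -- the basic idea
# behind shrinking LLM weights for efficient CPU inference. Illustrative
# only; production frameworks quantize per block with tuned schemes.

def quantize_int8(weights):
    """Map a list of floats to int8 values plus a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.77]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(q, round(max_err, 5))  # error stays within half a quantization step
```

The reconstruction error is bounded by half the scale, which is why int8 models can match float accuracy so closely for many workloads.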
Speaker:
Chiplets are offering automotive OEMs and suppliers more flexibility to customize their silicon as part of the shift to software-defined vehicles. As the industry embraces chiplet technology, it requires standards to ensure compatibility between chiplets from different providers and to create an easy-to-build platform. Here, we explore the excitement around chiplets in the automotive sector and Arm's role in supporting the development of standards and foundational compute platforms for its expansive ecosystem.
Speaker:
The latest Armv9 architecture delivers industry-leading enhancements that increase compute capability with more AI performance in each generation, from Neon and matrix-multiply (MatMul) extensions to SVE2. Join this session for the inside track on enabling more efficient AI compute for your next-gen solution.
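As an illustration of why these extensions matter for AI, the sketch below emulates the semantics of an Armv9 dot-product instruction such as SDOT in plain Python (lane count and layout simplified for clarity): each 32-bit lane accumulates a 4-way int8 dot product in a single operation, which is the workhorse of int8 inference.

```python
# Illustrative sketch (not real intrinsics): the semantics of a SIMD
# dot-product instruction like SDOT, which accumulates 4-way signed
# int8 dot products into 32-bit lanes in one operation.

def sdot(acc, a, b):
    """acc: 4 int32 lanes; a, b: 4 lanes, each holding 4 int8 values."""
    for lane in range(4):
        acc[lane] += sum(x * y for x, y in zip(a[lane], b[lane]))
    return acc

acc = [0, 0, 0, 0]
a = [[1, 2, 3, 4]] * 4            # four lanes of int8 inputs
b = [[1, -1, 1, -1]] * 4
print(sdot(acc, a, b))            # each lane: 1 - 2 + 3 - 4 = -2
```

One such instruction replaces eight multiplies and adds per lane, which is where much of the per-generation AI throughput gain comes from.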
Speaker:
Virtual prototyping helps automotive semiconductor, tier 1, and OEM companies speed up development, boost productivity, cut costs and enhance hardware and software quality for future vehicles. This session explores the virtual prototyping offerings delivered by Arm partners using the latest Arm Automotive Enhanced (AE) technology, which can accelerate development cycles by up to two years.
Speaker:
Gen Shimada
Country Manager, Linaro Japan
AI at the edge is built on a robust foundation of secure, connected devices performing critical tasks in real time. In this session, Linaro highlights the importance of ONELab, a solution that empowers businesses to build AI-driven edge devices that are secure, compliant, ready for deployment, and faster to launch on the market. With ONELab, realizing the full potential of AI at the edge for innovative applications has never been closer or easier.
Speaker:
The IoT ecosystem has already shipped billions of chips based on Arm, so it's fair to say the IoT already runs on Arm. The IoT industry, however, never stands still. We are seeing a rapid market acceleration that demands even higher-performance solutions to deliver new and exciting use cases. We all, therefore, have to continue to innovate.
In this session, we present our hardware, software, and standards solutions that enable our customers and the entire Arm ecosystem to participate and win in this rapidly changing environment. Join us to learn more about the latest and greatest technology Arm is creating for the leaders in IoT to succeed.
Speaker:
Arm’s mission of relentless innovation in CPU architecture means that we’re never standing still. In this talk, we explore the architectural challenges of AI and how our architecture features deliver significant improvements for AI and related workloads.
Speaker:
Arm Flexible Access and Arm Total Access give a wide range of silicon and OEM partners fast and easy access to Arm technology, tools, and support through subscription. Discover how easy evaluation, fewer contracts, and predictable costs are helping nearly 300 Arm partners get to market faster with their best products.
We’ll discuss how Arm accelerates a wide range of partner types, from start-ups through to high performance Cortex and Neoverse compute technology used by the biggest Arm partners.
Speaker:
The rapid evolution of AI is reshaping mobile and consumer devices, presenting new opportunities such as innovative AI-driven screen experiences and products that adapt seamlessly to user needs, change levels of productivity, and integrate into our daily lives.
Generative AI is one of the buzzwords of the year and is becoming one of the pillars of the next generation of AI. Technology advances and the pace of innovation are accelerating rapidly.
So how will the next generation of AI change compute requirements and redefine technology parameters in mobile and consumer devices?
Speaker:
Yukinori Sugiyama
Business Development Department, Renesas Electronics Corporation
Renesas' MCU/MPU products, evolved with Arm technology, offer innovative solutions across a wide range of areas, from cloud collaboration to edge computing.
In this seminar, Renesas will explore the appeal of these versatile MCU/MPU products covering various fields and introduce specific product lines and solutions.
Let’s explore together the possibilities of technologies that will create the future.
Speaker:
The rapid expansion of AI has led to a major shift in infrastructure technology. Arm Neoverse has emerged as the platform of choice, offering the best combination of performance, efficiency, and design flexibility. This roadmap session explores how Arm Neoverse forms the foundation for partner innovation across cloud, wireless, networking, HPC, and edge, enabling the deployment of performant and vastly more efficient AI infrastructure on Arm. From Neoverse IP products to Arm Neoverse Compute Subsystems (CSS) and Arm Total Design, we show why industry leaders choose Arm Neoverse and how it is pivotal in accelerating and shaping the future of AI infrastructure.
Speaker:
The automotive industry is undergoing one of its most significant architectural shifts, moving from traditional embedded systems to sophisticated software-defined, high-performance compute architectures. This session addresses the increased complexities for both hardware and software, while also highlighting the exceptional opportunities for the ecosystem to collaborate in creating AI-enabled, software-defined vehicles of the future.
Speaker:
Arm has worked with the Google AI Edge team to integrate KleidiAI into the MediaPipe framework through XNNPACK. These improvements increase the throughput of quantized LLMs running on Arm chips that contain the i8mm feature. This presentation will share new techniques for Android developers who want to efficiently run LLMs on-device.
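Before taking the accelerated path, a developer can check whether the CPU advertises the i8mm feature; on Linux and Android the flags appear in the `Features` line of `/proc/cpuinfo`. The helper below is a minimal sketch that parses such text (the sample string is illustrative, not from a specific device):

```python
# Minimal sketch: detect a CPU feature flag such as i8mm by parsing
# /proc/cpuinfo-style text. The "Features" line format here follows
# the usual AArch64 Linux layout; treat it as an assumption.

def has_feature(cpuinfo_text, feature):
    """Return True if the given flag appears in a Features line."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("features"):
            flags = line.split(":", 1)[1].split()
            if feature in flags:
                return True
    return False

sample = "processor : 0\nFeatures : fp asimd sve i8mm\n"
print(has_feature(sample, "i8mm"))  # True for this sample text
```

On a real device you would pass the contents of `/proc/cpuinfo` (or query the platform's CPU-feature API) instead of the sample string.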
Speaker:
AI has been present in vehicles for at least a decade, though recent advances have accelerated its pervasiveness. AI is foundational for improving in-vehicle user experiences and automated features, yet it brings a set of unique challenges not faced by other industries. Here we discuss how the future can become reality and where there are opportunities to solve some of the greatest challenges.
Speaker:
Arm Compiler for Embedded (also known as AC6) is widely used for software development for Arm-based products. Developed and supported by true Arm experts, it combines early support for new Arm architectures and cores, highly competitive scores on key embedded benchmarks, and a safety-qualified variant for development of safety-critical systems, making it the professional embedded toolchain for Arm. We’re investing in significant changes to Arm Compiler for Embedded to bring even more value to developers of Arm-based embedded products. These include POSIX support to enable the use of rich embedded operating systems, more security features for developers focused on cyber-security and memory safety, and better compatibility with GCC. We’re also creating a free-to-use, 100% open-source toolchain based on LLVM technology, with identical functionality and performance to our professional toolchain. Whichever compilation toolchain you currently use for Arm-based development, the changes discussed in this session will be of great interest to you.
Speaker:
Chip design has once again seen a paradigm shift. From the north and south bridges of the PC era, the progress of Moore's Law drove the integration of all IP into a single SoC, and the chiplet architecture was born as Moore's Law approached its limits. Egis/Alcor Micro will stand on the shoulders of two giants, Arm CSS V3 and TSMC, combining two key chiplet technologies, UCIe and CoWoS, to launch a scalable, high-performance, and highly power-efficient AI HPC server solution. We will combine CPU, AI accelerator, and I/O chiplets in a Lego-style, highly flexible form to create the AI HPC server product that best meets customer needs.
Speaker:
Benchmarking ML inference is crucial for software developers as it ensures optimal performance and efficiency of machine learning workloads. Attendees will learn how to install and run TensorFlow on their Arm-based cloud servers and utilize the MLPerf Inference benchmark suite from MLCommons to evaluate ML performance.
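The measurement loop at the heart of such benchmarking can be sketched in a few lines. This is not the MLPerf harness itself, just a minimal stand-in that times repeated calls to an inference function and reports a 90th-percentile latency, the style of metric MLPerf's single-stream scenario uses; `model()` here is a dummy workload, not a real TensorFlow model.

```python
# Minimal latency-measurement sketch in the spirit of MLPerf's
# single-stream scenario: time repeated inference calls and report
# a tail-latency percentile. model() is a placeholder workload.
import time

def benchmark(model, runs=100):
    """Return the 90th-percentile latency (seconds) over `runs` calls."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        model()
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return latencies[int(0.9 * len(latencies)) - 1]

def model():
    # Dummy compute standing in for a real inference call.
    sum(i * i for i in range(1000))

print(f"p90 latency: {benchmark(model) * 1e6:.1f} us")
```

The real suite adds accuracy checks, fixed query counts, and scenario rules on top of this basic idea.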
Speaker:
KleidiAI is a set of micro-kernels that integrate into machine learning frameworks, accelerating AI inference on Arm-based platforms. These micro-kernels are hand-optimized in Arm assembly code to leverage modern architecture instructions, significantly speeding up AI inference on Arm CPUs. This presentation is an introduction for developers who are curious about how KleidiAI works and how it delivers this speedup.
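To make the micro-kernel idea concrete, the sketch below computes a small fixed-size output tile with int32 accumulation over int8-range inputs, which is the core loop a matmul micro-kernel implements; real KleidiAI kernels do this over packed blocks in hand-tuned assembly, with tile sizes chosen to fit the register file.

```python
# Sketch of what a matmul micro-kernel computes: one small output tile
# (here 2x2) with int32 accumulation over int8-range inputs. Real
# kernels vectorize this inner loop with instructions like SDOT/SMMLA.

def microkernel_2x2(A, B, K):
    """A: 2 rows of K int8s; B: K rows of 2 int8s; returns a 2x2 tile."""
    C = [[0, 0], [0, 0]]
    for k in range(K):              # walk the shared dimension
        for i in range(2):
            for j in range(2):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8], [9, 10], [11, 12]]
print(microkernel_2x2(A, B, 3))    # [[58, 64], [139, 154]]
```

A full matmul is then just this tile computation repeated across the output matrix, which is why optimizing the micro-kernel pays off everywhere.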
Speaker:
Generative AI holds exciting potential for edge AI applications, particularly in creating tangible business value with impactful use cases across industries and business functions. In the latter half of 2023, a trend emerged with the introduction of smaller, more efficient LLMs, such as Meta's Llama, TinyLlama, Google's Gemini Nano, and Microsoft's Phi. These advancements are facilitating the deployment of LLMs at the edge, keeping data on the device, thus safeguarding individual privacy, and enhancing user experience by reducing latency and improving responsiveness. Join us as we unveil real-world examples of Arm-powered AI and generative AI solutions spanning diverse industries. Explore how Arm enables you to harness the complete potential of generative AI at the edge, even on the smallest devices, revolutionizing your business and shaping a smarter, more interconnected future.