Presentations
ATS brought together some of the most notable thought leaders in the Arm ecosystem, covering topics from the datacenter and the auto industry to IoT and consumer devices. Get the details here:
Speaker:
James McNiven
Vice President, Product Management, Client Line of Business, Arm
By the end of 2025, there will be more than 100 billion AI-capable Arm-powered devices in the world, built by the brightest leaders in this partner ecosystem. This enormous opportunity presents new challenges that we must come together to resolve; delivering AI everywhere brings a new level of power consumption and compute complexity that requires us to rethink everything.
The solution requires a platform that seamlessly combines power efficiency, optimized software, and integrated solutions that bring your ideas to market faster. Join us as we highlight the technologies, solutions, and opportunities underpinning the Arm platform, and how, together, we will build the AI compute platform of the future.
Speaker:
SW Hwang (황선욱)
President, Arm Korea
SW Hwang hosts a panel discussion with Jinwook Oh, CTO of Rebellions, and JangKyu Lee, CEO/President of Telechips Inc.
In this session, Arm leads a conversation with these partners to explore the AI opportunities in their respective domains and how they plan to address them with their AI strategies.
Speaker:
박준규
CEO, ADTechnology
This presentation addresses the scalability and competitiveness of modern computing architectures. Through these strategies and visions, attendees will gain insight into the direction of the partnership between ADTechnology and Arm and how it is positioned to become a key driver of future technological advancement.
Regarding chip design and implementation, the presentation will cover the evolution to Arm Neoverse V3 and the development progress from 5nm to 2nm nodes, highlighting ADTechnology’s strategy for maximizing performance and energy efficiency. Additionally, it will address business collaboration and co-development opportunities leveraging Arm’s ecosystem to accelerate time-to-market and lower market entry barriers.
The IoT ecosystem has already shipped billions and billions of chips based on Arm, so it's fair to say the IoT already runs on Arm. The IoT industry, however, never stands still. We are seeing a rapid acceleration in the market that demands even higher-performance solutions to deliver new and exciting use cases. We all, therefore, have to continue to innovate.
In this session, we will present the hardware, software, and standards solutions that will enable our customers and the entire Arm ecosystem to participate and win in this rapidly changing environment. Join us to learn more about the latest and greatest technology Arm is creating to help the leaders in IoT succeed.
Arm has worked with the Google AI Edge team to integrate KleidiAI into the MediaPipe framework through XNNPACK. These improvements increase the throughput of quantized LLMs running on Arm chips that contain the i8mm feature. This presentation will share new techniques for Android developers who want to efficiently run LLMs on-device.
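Before relying on these optimized paths, developers may want to confirm that the target device actually exposes the i8mm extension. The following minimal sketch, assuming a Linux environment with access to /proc/cpuinfo (for example an Arm server, or an Android device with a Python runtime reached via adb shell), reads the kernel's CPU feature flags; the flag names come from Linux, not from MediaPipe or KleidiAI:

```python
# Minimal sketch: check whether the CPU advertises the i8mm feature
# (int8 matrix multiplication) that the KleidiAI/XNNPACK fast paths use.
# Assumes a Linux kernel exposing /proc/cpuinfo.

def cpu_features(path="/proc/cpuinfo"):
    feats = set()
    with open(path) as f:
        for line in f:
            if line.lower().startswith("features"):
                feats.update(line.split(":", 1)[1].split())
    return feats

feats = cpu_features()
for ext in ("asimd", "asimddp", "i8mm", "sve", "sve2"):
    print(f"{ext:8s} {'yes' if ext in feats else 'no'}")
```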
Speaker:
구교근
CTO, SoC Design & Verification, CoAsia
This SoC is designed for efficient Vision AI performance. Unlike existing SoCs, it is partitioned into dedicated subsystems for vision processing and for AI processing and operation.
This design approach provides benefits in performance, flexibility, and development time. The main subsystems and their functions are listed below, followed by an illustrative dataflow sketch.
Main subsystems and functions:
1. CPU subsystem: main CPU (two quad-core Cortex-A76 clusters) with L1 and L2 caches, a GIC (Generic Interrupt Controller), and a debugging subsystem (CoreSight).
2. ISP (Image Signal Processor): processes raw image data from the camera sensor, performing demosaicing, noise reduction, color correction, and white balance adjustment.
3. NPU: an AI accelerator for vision algorithms, performing object detection, classification, and segmentation using CNN or Transformer models.
4. GPU: additional acceleration for specific vision or AI tasks in conjunction with the NPU.
5. Memory subsystem: high-bandwidth memory access for high-speed data transfer and sharing between the subsystems.
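As a purely illustrative companion to the list above, here is a hypothetical Python sketch of how a frame might flow through these subsystems; every function is a NumPy stand-in for dedicated hardware, not the SoC's actual API:

```python
import numpy as np

# Illustrative dataflow only: NumPy placeholders for the dedicated
# hardware subsystems described above. All math here is stand-in logic.

def isp_stage(raw):
    """ISP: demosaic/denoise/color-correct (modeled as simple scaling)."""
    rgb = np.stack([raw, raw, raw], axis=-1).astype(np.float32)  # fake demosaic
    return np.clip(rgb / 255.0, 0.0, 1.0)                        # normalize

def npu_stage(frame):
    """NPU: vision inference (modeled as a random 'classifier head')."""
    logits = frame.mean(axis=(0, 1)) @ np.random.rand(3, 10)     # fake CNN head
    return int(np.argmax(logits))

def gpu_stage(frame):
    """GPU: auxiliary post-processing (modeled as a strided downscale)."""
    return frame[::2, ::2]

raw_frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # camera RAW
frame = isp_stage(raw_frame)   # ISP subsystem
label = npu_stage(frame)       # NPU subsystem
preview = gpu_stage(frame)     # GPU subsystem
print(f"class={label}, preview shape={preview.shape}")
```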
Arm Flexible Access and Arm Total Access provide a wide range of silicon and OEM partners with fast and easy access to Arm technology, tools, and support through subscription. Discover how easy evaluation, fewer contracts, and predictable costs are helping nearly 300 Arm partners get to market faster with their best products.
We'll discuss how Arm accelerates a wide range of partner types, from startups to the biggest Arm partners using high-performance Cortex and Neoverse compute technology.
The use of ML and generative AI is rapidly shifting from hype into adoption, creating the need for more efficient inferencing at scale. Large language models are getting smaller and more specialized, offering comparable or improved performance at a fraction of the cost and energy. Advances in inferencing techniques, like quantization and sparse coding, and the rise of specialized, lightweight frameworks like llama.cpp enable LLMs to run on CPUs and provide good performance for a wide variety of use cases. Arm has been advancing the capabilities of Neoverse cores to address both the compute and memory needs of LLMs, while maintaining its focus on efficiency. Popular ML frameworks like llama.cpp and PyTorch, along with ML compilers, allow easy migration of ML models to Arm-based cloud instances. These hardware and software improvements have led to an increase in on-CPU ML performance for use cases like LLMs and recommenders. This gives AI application developers flexibility in choosing CPUs or GPUs, depending on use case and sustainability targets. Arm is also making it possible for ML system designers to create their own bespoke ML solutions, including accelerators combined with CPU chiplets, to offer the best of both worlds.
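To make the quantization idea concrete, here is a minimal sketch, assuming per-row symmetric int8 quantization, of how a float32 weight matrix shrinks fourfold with modest reconstruction error; production schemes such as llama.cpp's block-wise formats are more elaborate:

```python
import numpy as np

# Minimal sketch of symmetric int8 weight quantization, the idea that lets
# LLM weights shrink 4x vs float32 and map onto int8 matmul instructions.

rng = np.random.default_rng(0)
w = rng.standard_normal((4096, 4096)).astype(np.float32)   # a "weight matrix"

scale = np.abs(w).max(axis=1, keepdims=True) / 127.0       # per-row scale
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_deq = w_q.astype(np.float32) * scale                     # dequantize

err = np.abs(w - w_deq).mean()
print(f"mean abs error: {err:.5f}")
print(f"fp32 size: {w.nbytes/2**20:.0f} MiB, int8 size: {w_q.nbytes/2**20:.0f} MiB")
```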
AI has been present in vehicles for at least a decade, though recent advances have accelerated its pervasiveness. AI is foundational for improving in-vehicle user experiences and automated features, yet it brings a set of unique challenges not faced by other industries. Here we discuss how the future can become reality and where there are opportunities to solve some of the greatest challenges.
Chiplets are offering automotive OEMs and suppliers more flexibility to customize their silicon as part of the shift to software-defined vehicles. As the industry embraces chiplet technology, it requires standards to ensure compatibility between chiplets from different providers and to create an easy-to-build platform. Here, we explore the excitement around chiplets in the automotive sector and Arm's role in supporting the development of standards and foundational compute platforms for its expansive ecosystem.
The latest Armv9 architecture delivers industry-leading enhancements that increase compute capabilities with more AI performance each generation, from Matmul and Neon to SVE2. Join this session for the inside track on enabling more efficient AI compute for your next-gen solution.
Speaker:
Grant Likely
Chief Technology Officer, Linaro
Moving workloads from x86_64 to the Arm architecture can be challenging. The main porting challenge is to make effective use of architectural features such as SVE2, cache hierarchies, and SIMD vectorisation, which requires performance insights to determine how well these hardware features are being utilised. Compilers and libraries offer varying degrees of performance improvement by making effective use of the underlying hardware architecture, but they do not always get it right, which can cause performance degradation during the initial porting phase. Linaro Forge provides the performance insights to quickly identify performance hot spots and helps determine whether the code is utilising the hardware effectively. This in turn enables users to tune their applications for optimal performance on the hardware their code is being migrated to.
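Linaro Forge is a profiler, so no Forge output is shown here, but the kind of hot spot it surfaces can be illustrated with a deliberately naive scalar loop versus a vectorized equivalent; the gap below is what effective SIMD utilisation looks like in practice:

```python
import time
import numpy as np

# Hedged illustration of the hot spots a profiler like Linaro Forge surfaces:
# the same reduction written as a scalar loop vs. a vectorized call whose
# inner kernel can use SIMD (Neon/SVE) loads and adds.

x = np.random.rand(10_000_000)

t0 = time.perf_counter()
acc = 0.0
for v in x:            # scalar loop: no SIMD, heavy per-element overhead
    acc += v
t1 = time.perf_counter()

t2 = time.perf_counter()
acc_vec = x.sum()      # vectorized reduction over the same data
t3 = time.perf_counter()

print(f"loop:       {t1 - t0:.3f}s")
print(f"vectorized: {t3 - t2:.3f}s (same result: {np.isclose(acc, acc_vec)})")
```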
Generative AI holds exciting potential for Edge AI applications, particularly in creating tangible business value with impactful use cases across industries and business functions. In the latter half of 2023, a trend emerged with the introduction of smaller, more efficient LLMs, such as Meta's Llama, TinyLlama, Google's Gemini Nano, and Microsoft's Phi. These advancements are facilitating the deployment of LLMs at the edge, ensuring data stays on the device, thus safeguarding individual privacy and enhancing user experience by reducing latency and improving responsiveness. Join us as we unveil real-world examples of Arm-powered AI and Generative AI solutions spanning diverse industries. Explore how Arm enables you to harness the complete potential of Generative AI at the edge, even on the smallest devices, revolutionizing your business and sculpting a smarter, more interconnected future.
The rapid evolution of AI is reshaping mobile and consumer devices, presenting new opportunities: innovative AI-driven screen experiences and products that adapt seamlessly to user needs, transform productivity, and integrate into our daily lives.
Generative AI is one of the buzzwords of the year and is becoming one of the pillars of the next generation of AI. Technology advances and the pace of innovation are accelerating rapidly.
So how will the next generation of AI change compute requirements and redefine technology parameters in mobile and consumer devices?
Speaker:
Anmar Oueja
Head of Product Management, Linaro
AI at the edge is built on a robust foundation of secure, connected devices performing critical tasks in real time. In this session, Linaro highlights the importance of ONELab, a solution that empowers businesses to build AI-driven edge devices that are secure, compliant, ready for deployment, and faster to launch on the market. With ONELab, the full potential of AI at the edge for innovative applications has never been closer or easier to reach.
The rapid expansion of AI has led to a major shift in infrastructure technology. Arm Neoverse has emerged as the platform of choice, offering the best combination of performance, efficiency, and design flexibility. This roadmap session explores how Arm Neoverse forms the foundation for partner innovation across cloud, wireless, networking, HPC, and edge, enabling the deployment of performant and vastly more efficient AI infrastructure on Arm. From Neoverse IP products to the Arm Neoverse Compute Subsystems (CSS) and Arm Total Design, we show why Arm Neoverse is the platform of choice for industry leaders and how it is pivotal in accelerating and shaping the future of AI infrastructure.
Chip design has once again seen a paradigm shift. From the north and south bridges of the PC era, Moore's Law drove the integration of all IP into a single SoC; as Moore's Law approached its limits, the chiplet architecture was born. Egis/Alcor Micro will stand on the shoulders of two giants, Arm CSS V3 and TSMC, combining two key chiplet technologies, UCIe and CoWoS, to launch a scalable, high-performance, and highly power-efficient AI HPC server solution. We will combine CPU chiplets, AI accelerator chiplets, and IO chiplets in a Lego-style, highly flexible form to create the AI HPC server product that best meets customer needs.
Benchmarking ML inference is crucial for software developers as it ensures optimal performance and efficiency of machine learning workloads. Attendees will learn how to install and run TensorFlow on their Arm-based cloud servers and utilize the MLPerf Inference benchmark suite from MLCommons to evaluate ML performance.
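As a hedged first step, assuming TensorFlow has been installed on the Arm-based server (for example with pip install tensorflow), the sketch below verifies the build and times a small matmul-heavy model; the official MLPerf Inference scenarios additionally require MLCommons' LoadGen harness, which is not shown:

```python
import time
import tensorflow as tf

# Smoke test, not the MLPerf harness: confirm TensorFlow runs on the Arm
# server and measure the latency of a small inference-like workload.

print("TensorFlow", tf.__version__)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1024,)),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(10),
])

x = tf.random.uniform((64, 1024))
model(x)  # warm-up: builds graphs and allocates buffers

t0 = time.perf_counter()
for _ in range(100):
    model(x)
dt = (time.perf_counter() - t0) / 100
print(f"mean batch latency: {dt*1e3:.2f} ms")
```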
Arm's mission of relentless innovation in CPU architecture means that we never stand still. In this talk we will explore the architectural challenges of AI and how our architecture features will deliver significant improvements for running AI and related workloads.
KleidiAI is a set of micro-kernels that integrate into machine learning frameworks, accelerating AI inference on Arm-based platforms. These micro-kernels are hand-optimized in Arm assembly code to leverage modern architecture instructions, significantly speeding up AI inference on Arm CPUs. This presentation is an introduction for developers who are curious about how KleidiAI works and how it delivers these speedups.
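To give a feel for what such a micro-kernel computes, here is a hedged NumPy model of the int8 multiply-accumulate pattern that i8mm instructions (such as SMMLA) implement in hardware; KleidiAI's real kernels are hand-written assembly, not Python:

```python
import numpy as np

# Hedged model of the int8 multiply-accumulate pattern behind the i8mm
# instructions that KleidiAI's hand-written micro-kernels are built around.

rng = np.random.default_rng(1)
a = rng.integers(-128, 128, size=(4, 64), dtype=np.int8)   # activations
b = rng.integers(-128, 128, size=(64, 4), dtype=np.int8)   # weights

# Accumulate int8 products into int32, as the hardware does, so the sums
# cannot overflow before the final rescale back to the model's scale.
acc = a.astype(np.int32) @ b.astype(np.int32)
print(acc)
```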