AI & Big Data Expo North America 2025, a specialized exhibition for artificial intelligence and big data, held in Santa Clara, California
In the world of enterprise AI, trust, governance, and automation are key
Korean companies in the AI semiconductor sector seek diverse collaboration opportunities with AI service companies
From June 4th to 5th, 'AI & Big Data Expo North America 2025', a specialized exhibition for artificial intelligence and big data, was held successfully at the Santa Clara Convention Center in California, USA. As part of the TechEx North America series, the exhibition is operated as a joint event centered on AI and big data that integrates a total of eight technology fields, including cybersecurity, the Internet of Things (IoT), digital transformation, intelligent automation, edge computing, and data centers, positioning it as a comprehensive platform for technology-driven corporate innovation.
[Source: Photographed by KOTRA Silicon Valley Trade Center]
TechEx is not limited to a single technology area. Structured as both an exhibition and a conference, it brings together technology leaders from various industries to comprehensively cover the present and future of enterprise technology, including real-world AI adoption, operational infrastructure, cybersecurity risks, and organizational innovation strategies.
In particular, this year's AI & Big Data Expo consisted of theme tracks covering all areas of AI, including enterprise AI, generative AI, machine learning, data analytics, ethical AI, and natural language processing (NLP). Around 250 executives from major global companies, including IBM, AWS, and SAP, participated as speakers and shared industry-specific adoption cases and forecasts. More than 70% of attendees were director-level or above, so the event served as a networking venue not only for technology exchange but also for practical commercialization and partnership exploration.

[Source: Photographed by KOTRA Silicon Valley Trade Center]
Enterprise Strategies for AI Scaling (Scaling AI for Transformation – Challenges and Solutions)
At the conference held alongside the exhibition, executives from leading companies in the cloud and data infrastructure sector, including AWS, Dropbox, Oracle, OpenText, and Vultr, gathered to discuss the practical challenges and solutions that arise as AI spreads through the enterprise. The discussion focused on the problems AI faces as it enters the scale-up phase, where it begins to have a real impact across industries: data silos (data locked in departmental systems or databases so that it cannot be shared across the organization), the reliability of training data, and hallucination (the generation of incorrect answers).
Taimur Rashid, Director of AWS's Generative AI Innovation Center, emphasized that "AI strategy, data strategy, and business strategy should not be separate but should operate within a single integrated framework." Introducing the company's 'SageMaker' solution, he explained through case studies that meaningful AI utilization becomes possible only when data silos are resolved and the organization's various systems are connected through an integrated data-management framework. No matter how good a model is, he noted, it is difficult to achieve results when data is scattered; and even with integrated data and a good AI model, meaningful results come only when both are combined with the company's business strategy.
Meanwhile, Josh Clemm, VP of Engineering at Dropbox, said that large-scale unstructured data is emerging as a new challenge. In response, Dropbox developed 'Dash', a platform that integrates and manages third-party content, and he introduced how it connects information from distributed sources such as documents, chats, and emails into a single index. This infrastructure helps AI provide more contextual responses and is regarded as a foundational technology for increasing AI's usefulness in practical environments.
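The single-index approach described above can be sketched in a few lines. This is a toy illustration of the concept only, not Dropbox's actual Dash implementation; the class and method names are hypothetical:

```python
from collections import defaultdict

class UnifiedIndex:
    """Toy keyword index that merges content from multiple sources
    (documents, chats, emails) into one searchable structure."""
    def __init__(self):
        self._index = defaultdict(set)   # token -> set of (source, doc_id)
        self._store = {}                 # (source, doc_id) -> original text

    def add(self, source, doc_id, text):
        key = (source, doc_id)
        self._store[key] = text
        for token in text.lower().split():
            self._index[token].add(key)

    def search(self, query):
        # Return items matching every query token, regardless of source.
        keys = None
        for token in query.lower().split():
            hits = self._index.get(token, set())
            keys = hits if keys is None else keys & hits
        return [(s, d, self._store[(s, d)]) for s, d in sorted(keys or set())]

idx = UnifiedIndex()
idx.add("email", "e1", "Q3 budget review meeting on Friday")
idx.add("chat",  "c7", "the budget spreadsheet is updated")
idx.add("doc",   "d2", "onboarding checklist for new hires")
print(idx.search("budget"))  # hits span both the email and chat sources
```

Because every source feeds one index, a query like "budget" surfaces related items from email and chat together, which is the property that lets an assistant answer with full context.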
Kevin Cochrane, CMO of Vultr, emphasized that when introducing AI, companies should design ethical governance across the enterprise alongside technical performance improvements. He cited real cases of the hallucination issue experienced by companies in the early stages of AI diffusion and reminded the audience that AI is still a technology requiring human verification. Before expanding AI enterprise-wide, he stressed, companies must manage risks such as how training data is collected, how data reliability is secured, and how hallucination is prevented.
Beyond a purely technical view of AI diffusion strategy, this session offered insights at multiple levels: data structure, organizational structure, and technology-ethics integration. The messages that "the success of an AI project ultimately depends on data and infrastructure" and that "AI is not a simple tool but a medium for reorganizing organizational structure" were emphasized throughout, making it a meaningful discussion from which companies could draw practical guidance on how to scale AI after adoption and operate it sustainably.
The Present and Future of Enterprise Generative AI at the IBM Booth
One of the booths that drew attention at the AI & Big Data Expo 2025 was IBM's. IBM held a seminar that systematically introduced the practical adoption and maturity cycle of generative AI (GenAI) in companies, centered on the new book 'Generative AI for Business' (O'Reilly Media, 2025) written by IBM's internal research team.

[Source: Photographed by KOTRA Silicon Valley Trade Center]
IBM went beyond simply introducing AI technology and presented a step-by-step roadmap for how organizations should strategically adopt and evolve generative AI. Particularly memorable was the prediction that "in its final stage, generative AI will evolve into 'Business General Intelligence', an integrated automation system in which multiple agents autonomously collaborate to execute business activities."
The generative AI maturity cycle introduced by IBM consists of the following five stages. The first is the introduction stage, during which a 'proof of concept (PoC) project' is conducted to test generative AI technology on a small scale in one or two departments within the organization. The second is the diffusion stage: if the initial experiment shows positive results, the use cases are expanded across multiple departments. At this stage, the return on investment (ROI) begins to be quantified, and the value of AI adoption is recognized throughout the organization. The third is the internalization stage, during which generative AI is fully integrated into the organization's daily workflow. Here, the goal is to automate repetitive tasks and significantly improve work efficiency by utilizing large language models (LLMs) and retrieval-augmented generation (RAG) technologies.
*(Note) Retrieval-augmented generation (RAG) refers to an AI system that combines retrieval and generation technologies. It addresses shortcomings of LLMs, such as the possibility of factual errors and limits in contextual understanding, by connecting the model to a large-scale structured knowledge base so that answers are generated with supporting evidence and relevant contextual information.
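The retrieve-then-generate flow described in the note can be sketched as follows. This minimal example substitutes bag-of-words vectors for a real embedding model and stops at prompt assembly rather than calling an LLM; all names and the sample corpus are illustrative:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector, standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Rank passages by similarity to the query: the 'retrieval' step."""
    q = embed(query)
    return sorted(corpus, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]

def build_prompt(query, passages):
    """Ground the generator in retrieved evidence: the 'augmented' step."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "The refund window is 30 days from delivery.",
    "Support is available on weekdays from 9 to 5.",
    "Shipping is free for orders over 50 dollars.",
]
prompt = build_prompt("How long is the refund window?",
                      retrieve("refund window length", corpus, k=1))
print(prompt)
```

The prompt that reaches the model contains only the retrieved passage, which is how RAG keeps answers anchored to the knowledge base instead of the model's memory.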
The fourth is the complex system construction stage. Here, generative AI goes beyond simply assisting human work: multiple individual AI systems are organically connected to perform complex decision-making together. For example, a structure is established in which one AI analyzes data and another AI suggests a response plan based on the results. The fifth and final stage is the autonomous enterprise system stage, the most advanced form of generative AI utilization. At this stage, AI can understand a company's strategic goals and establish and execute an action plan without human intervention. For example, a 'fully automated' work system in which AI establishes an entire marketing strategy, writes emails, schedules meetings, and analyzes subsequent performance becomes a reality.
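The agent handoff in the fourth stage, where one system analyzes and another plans, can be sketched with plain functions standing in for LLM-backed agents. This is a toy orchestration to show the data flow, not IBM's architecture; all names are hypothetical:

```python
def analyst_agent(metrics):
    """Agent 1: flags anomalies in the data (a stub for an LLM-backed analyst)."""
    return [name for name, value in metrics.items() if value < 0]

def planner_agent(anomalies):
    """Agent 2: proposes an action for each finding handed over by Agent 1."""
    return [f"Investigate drop in {name} and draft a recovery plan"
            for name in anomalies]

# A minimal 'orchestrator': wire one agent's output into the next agent's input.
metrics = {"revenue_growth": 0.04, "user_retention": -0.12}
plan = planner_agent(analyst_agent(metrics))
print(plan)
```

The point of the sketch is the wiring: each agent has a narrow role, and the pipeline, not a human, carries results from one to the next.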
Another key message emphasized in the seminar was that “organizations that do not utilize AI will have difficulty securing a competitive advantage.” The presenter emphasized that “employees in all departments should be able to use AI tools, and not just developers, but all members with AI utilization capabilities will become the competitiveness of the organization.” In fact, IBM introduced that it operates an in-house AI Center of Excellence (CoE) for this process and is actively investing in increasing the AI understanding of all employees.
Enterprise AI Market: A Warring-States Era of Integrated vs. Niche Solutions
Booths introducing enterprise AI solutions were particularly noteworthy, revealing a market split between integrated solutions that bundle every tool a company needs into a single offering and niche solutions that apply AI to the specialized needs of specific job functions or industries.
(1) Comprehensive AI Solutions: IBM & AWS
IBM and AWS demonstrated their status as comprehensive AI solution providers at this exhibition. IBM introduced three core product lines centered on its AI platform, Watsonx. First, Watsonx.ai is a studio specializing in developing and training AI models, providing an environment where developers can directly train, tune, and validate AI models using open-source models and IBM's own LLMs. Second, Watsonx.data is a data platform optimized for AI training that integrates and manages various types of data sources. Lastly, Watsonx.governance is a risk-analysis tool for ensuring reliability throughout the entire process of AI model creation, deployment, and use. Building on Watsonx, IBM focused on 'Agentic AI for business' implementation cases, actively introducing a multi-agent configuration strategy that executes enterprise-wide workflows beyond the level of simple chatbots.
AWS introduced Bedrock, a managed platform that integrates the infrastructure, APIs, and AI services needed for GenAI applications, with an emphasis on security and scalability. Bedrock lets applications call various large language models (such as Anthropic's Claude, Meta's Llama, and Amazon's Titan) through APIs, and supports building RAG pipelines on private data such as internal corporate documents, making it well suited to enterprise use where security matters. SageMaker, meanwhile, is a platform that supports the entire AI development cycle, including building, training, deploying, and monitoring machine learning (ML) models; it mainly suits enterprises that develop in-house models or want to tune and optimize third-party LLMs for corporate purposes.
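As a rough illustration of the API-call pattern described above, the snippet below builds a Bedrock request body in the Anthropic Messages format and defines, without executing, the invocation itself. The model ID is only an example; an actual call requires AWS credentials and model access enabled in your account:

```python
import json

# Example model ID; check the Bedrock console for IDs enabled in your region.
CLAUDE_MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_claude_request(prompt, max_tokens=256):
    """Request body in the Anthropic Messages format that Bedrock expects."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke(client, prompt):
    """Call Bedrock via a boto3 'bedrock-runtime' client (not run here)."""
    resp = client.invoke_model(modelId=CLAUDE_MODEL_ID,
                               body=build_claude_request(prompt))
    return json.loads(resp["body"].read())["content"][0]["text"]

# Inspect the request body without making a network call.
body = json.loads(build_claude_request("Summarize our Q2 results"))
print(body["messages"][0]["role"])
```

In a live setup, `invoke(boto3.client("bedrock-runtime"), ...)` would send the request; swapping `CLAUDE_MODEL_ID` is how an application switches between the model families Bedrock hosts.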
<AWS booth introducing comprehensive AI solutions>

[Source: Photographed by KOTRA Silicon Valley Trade Center]
(2) Niche Solutions: Targeting specialized markets such as human resources management, compliance, and security management
Companies providing solutions for various specialized needs within the enterprise also stood out. Visier, with AI solutions specialized in HR data analytics, emphasized data-driven decision-making for workforce strategy. Visier's AI platform analyzes indicators such as employee performance, turnover-rate prediction, and diversity management in real time, supporting HR managers in making more strategic choices. In particular, its AI-based 'recruitment performance prediction model' attracted attention as a practical answer for companies struggling to secure and retain talent.
Trustero, specializing in compliance, introduced an automated solution that reduces the burden of audit response and regulatory compliance. It provides essential functions for companies preparing for standard certifications such as ISO 27001, SOC 2, and HIPAA, and is designed to significantly reduce resources compared to manual work through automated audit tracking and real-time status reporting by compliance item. It is expected to be widely used by companies ranging from startups handling sensitive data to large enterprises.
As AI adoption grows, responding to cybersecurity issues also matters, and Britive presented a solution for this. Britive introduced functionality that uses AI to automate privileged access management (PAM) in cloud environments, automatically revoking data access rights after real-time analysis and simplifying the approval process. This drew the attention of security professionals as a countermeasure that can reduce the security burden for companies whose system access has grown complicated due to GenAI.
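The idea of access rights that expire automatically can be sketched with a toy just-in-time grant store. This illustrates the zero-standing-privileges concept behind cloud PAM tools in general, not Britive's product; all names are hypothetical:

```python
import time

class JITGrants:
    """Toy just-in-time permission store: every grant carries an expiry,
    so no access right persists indefinitely."""
    def __init__(self):
        self._grants = {}  # (user, resource) -> expiry timestamp

    def grant(self, user, resource, ttl_seconds):
        self._grants[(user, resource)] = time.time() + ttl_seconds

    def allowed(self, user, resource, now=None):
        now = time.time() if now is None else now
        expiry = self._grants.get((user, resource))
        return expiry is not None and now < expiry

g = JITGrants()
g.grant("alice", "s3://reports", ttl_seconds=900)  # 15-minute window
print(g.allowed("alice", "s3://reports"))                          # active
print(g.allowed("alice", "s3://reports", now=time.time() + 1000))  # expired
```

Because every grant times out on its own, forgetting to revoke access no longer leaves a standing privilege, which is the security property these tools automate at cloud scale.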
Korea Pavilion introducing promising companies in the AI semiconductor field
The Korea Pavilion was operated by KOTRA in collaboration with the Korea Semiconductor Research Association and the Korea-US AI Semiconductor Innovation Center, with three promising AI semiconductor companies participating. The participating companies judged the show an ideal opportunity to introduce AI semiconductors, as it mainly draws AI solution companies and edge-device (IoT) companies that incorporate AI functions. Because application areas within the broad category of AI semiconductors differ by product, as do the partners that can create synergy, visitor outreach was customized in advance to match each company's goals.
<Overview of participating companies in the Korean Pavilion>
| Company name | Key exhibits |
| --- | --- |
| Dnotitia | 'Seahorse', a vector database cloud underpinning RAG technology, and 'Mnemos', a personal LLM server |
| AiMFUTURE | High-performance NPU IP 'NeuroMosAIc' and edge-device-optimized solutions |
| Mobilint | NPU chip 'MLA100' and ultra-small AI SoC 'REGULUS', boasting high performance and low power compared to existing GPUs |
[Source: Compiled by KOTRA Silicon Valley Trade Center]
AiMFUTURE is an AI semiconductor IP company spun off from LG Electronics' R&D center in 2020. At this exhibition, it showcased its high-performance NPU IP 'NeuroMosAIc' series and edge AI board solutions. NeuroMosAIc's architecture can execute multiple neural-network models simultaneously, an advantage for complex tasks such as facial recognition and voice keyword detection that require multimodal AI (AI that processes and combines various types of data, such as text, images, and audio, for integrated analysis). The company has collaborated with various fabless and design companies through IP licensing, and at the exhibition it focused on NPU solutions for edge-computing applications such as smart robotics, AIoT, smart cities, and autonomous driving. Among the visitors, a company that manufactures and operates smart recycling kiosks said, "We were using expensive GPUs to run our solution that automatically sorts recyclable waste, but we think we can achieve the same performance at a lower price with AiMFUTURE's NPU," and expressed its intention to explore cooperation.
Dnotitia is a company that combines AI and semiconductor technology to provide solutions specialized in vector search and RAG, foundational technologies of generative AI. In particular, 'Seahorse', a vector database built on the world's first self-developed VDPU (Vector Data Processing Unit), and 'Mnemos', a personal LLM device, attracted much attention from visitors. Seahorse dramatically improves the speed and accuracy of vector search, the core of RAG-based generative AI, while Mnemos is a personal edge LLM device that enables privacy protection and offline operation. Company A, a global leader in the CPU market, visited the booth and, raising limitations of existing LLMs, discussed the possibility of collaborating with Dnotitia on an enterprise LLM solution that runs in a local environment.
Mobilint is a Korean AI semiconductor startup established in 2019 that has developed NPU chips with high-performance, low-power characteristics. At this exhibition, the company showcased the MLA100 (an AI-acceleration PCIe card for edge use) and REGULUS (a standalone SoC for on-device AI). The MLA100 in particular was introduced as a practical alternative for companies considering lightweight AI systems such as edge servers or AI boxes, offering low power consumption and heat generation along with high computational efficiency compared to existing GPUs. REGULUS, a new product launched in 2024, is an ultra-small AI SoC that integrates a CPU, ISP, and codec, optimized for embedding AI functions in small smart devices. Throughout the exhibition, Mobilint engaged in active technical exchanges with local edge AI developers on GPU replacement and high-efficiency computing solutions. A professor in charge of a local university data center project who visited the Mobilint booth also offered to test the technology when operating an experimental data center on campus, providing an opportunity to build initial references in the US market.
Implications
This AI & Big Data Expo North America 2025 was an important venue for Korean participating companies to go beyond simply exhibiting products and explore practical possibilities for cooperation with global AI solution companies. Mr. C, a representative of a Korean participating company we met at the event, said, "It was a meaningful place where, through exchanges with various global companies holding leading technologies in generative AI and edge AI, we could gain insight into collaborations we had not previously considered and into where our technology could be applied."
In addition, this exhibition confirmed that the trend of the enterprise AI market is rapidly evolving beyond simple model performance competition to data infrastructure optimization, ethical governance, and agent-based automation. In this flow, it was an opportunity to examine how Korean companies can position their technologies within the global ecosystem, and it was also an exhibition with strategic significance in terms of future entry into the U.S. market and finding joint development partners.
Source: Compiled from materials by AI & Big Data Expo North America, IBM, K-ASIC, AWS, and KOTRA Silicon Valley Trade Center



