In Part 1 of this series, we explored the importance of legal professionals being at the AI table. This second article examines how systematic monitoring of emerging trends, known as "horizon scanning," can help legal teams navigate the rapidly evolving AI landscape.
Understanding Horizon Scanning in the AI Context
The pace of change in AI is unprecedented. ChatGPT reached 100 million users just two months after launch, making it the fastest-growing consumer app in history. With new AI applications and regulations emerging continually, legal teams benefit from anticipating developments rather than merely reacting to them.
This forward-looking approach involves monitoring across three key domains:
- Technological developments: Emerging AI capabilities that might impact the business.
- Political and regulatory movements: Policy initiatives and upcoming frameworks, from the EU's AI Act to UK regulations and guidance.
- Market and social trends: Public sentiment, industry standards, and best practices.
By identifying changes that could pose risks or create opportunities, organisations can adapt strategically rather than reactively.
Key Areas of Attention in 2025
Several interconnected AI-related trends are currently significant:
Data Protection and Security
AI systems process vast amounts of data, which creates information management challenges. In one notable incident, engineers at a major technology company leaked sensitive corporate data by pasting source code into ChatGPT, prompting the company to restrict the use of AI tools. A recent industry survey found that 27% of organisations have banned generative AI over data security concerns, with nearly half reporting instances of confidential information being entered into AI tools.
Ethical, Compliance and Reputational Dimensions
AI implementations carry interconnected ethical, legal and reputational implications:
- Algorithmic Fairness: Machine learning models have produced biased outcomes in lending, recruitment, and criminal justice contexts, potentially exposing organisations to discrimination claims.
- Evolving Standards: Bodies like ISO and IEEE are developing guidelines that, while voluntary, increasingly serve as expected benchmarks.
- Stakeholder Perception: Companies have faced backlash over AI implementations, from consumer concerns about biased algorithms to employee protests against perceived unethical projects.
As Cisco's Chief Legal Officer has noted, preserving stakeholder trust in the AI era requires thoughtful governance that addresses these dimensions holistically.
"Shadow AI" and Verification Challenges
The democratisation of AI tools creates oversight challenges. Departments may experiment with powerful AI capabilities independently, sometimes bypassing formal approval processes. In one widely discussed case at the First-tier Tax Tribunal (the “FTT”), a litigant faced sanctions after relying on AI-generated case citations that were not genuine, underlining the need to verify AI outputs before relying on them. External misuse of generative AI for deepfakes, phishing, or IP theft creates additional organisational vulnerabilities.
Organisational Approaches to Effective Monitoring
Forward-thinking organisations have begun to adopt structured approaches to AI development oversight:
Coordinated Intelligence Gathering
Rather than siloed monitoring, effective organisations establish cross-functional collaboration through committees or groups that include legal, compliance, IT, security, data science, and business unit representatives. This creates a valuable two-way information flow: legal shares external risk intelligence while operational teams provide ground-level implementation insights.
Such coordination extends to leadership communication, as boards grow increasingly concerned about how emerging technologies affect corporate risk and strategy. Regular briefings help keep AI governance on the leadership agenda.
Comprehensive AI Mapping
Creating a thorough inventory of AI implementations across the organisation serves as a foundation for effective monitoring. This typically reveals:
- Technology Landscape: What AI tools and systems are being used, by whom, and for what purposes.
- Data Flows: What information these systems use, where it comes from, and where outputs go.
- Risk Profiles: Which implementations present higher risk based on data sensitivity, application context, or regulatory exposure.
This mapping helps identify "shadow AI" projects and distinguish between lower-risk internal tools and higher-risk customer-facing applications. Assessing how data flows through these systems can also surface key compliance concerns: personal data may trigger privacy regulations, for example, while the use of third-party data can raise intellectual property issues.
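To make this concrete, an inventory entry can be captured as a structured record with a simple risk-tiering rule. The Python sketch below is purely illustrative: the `AISystem` class, its fields, and the tiering thresholds are assumptions, not an established standard.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """One entry in a hypothetical AI inventory (names are illustrative)."""
    name: str                    # e.g. "Contract review assistant"
    owner: str                   # accountable business unit
    purpose: str                 # what the tool is used for
    data_categories: list[str] = field(default_factory=list)  # e.g. ["personal", "third_party"]
    customer_facing: bool = False

    def risk_tier(self) -> str:
        """Assumed tiering rule: sensitive data or external exposure raises the tier."""
        if self.customer_facing or "personal" in self.data_categories:
            return "high"
        if "third_party" in self.data_categories:
            return "medium"
        return "low"

# Example: an internal tool that processes personal data is tiered "high".
tool = AISystem(
    name="HR screening model",
    owner="People team",
    purpose="CV triage",
    data_categories=["personal"],
)
print(tool.name, "->", tool.risk_tier())
```

Even a registry this simple makes it easier to spot unapproved tools and to prioritise review of the higher-risk entries.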
Multi-Channel Monitoring Systems
Organisations establish processes to track AI developments across multiple information streams:
- Regulatory and Legal: Following legislative developments, court decisions, and regulatory guidance.
- Technological: Monitoring research breakthroughs and product innovations.
- Industry and Market: Tracking peer activities, sector standards, and public discourse.
This often involves specialised information sources like legal update services, AI-specific publications, and industry association resources, supported by monitoring technologies that help manage information volume.
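As a rough illustration of how such streams might be pulled together, the sketch below polls a set of RSS/Atom feeds and flags items mentioning chosen keywords. It assumes the `feedparser` library (installed via pip) and uses placeholder URLs; substitute the regulator, legal-update, and industry feeds your team actually follows.

```python
# Minimal sketch of a multi-channel feed monitor. Requires: pip install feedparser
import feedparser

# Placeholder URLs only: replace with your team's actual sources.
SOURCES = {
    "regulatory": "https://example.com/regulator.rss",
    "technology": "https://example.com/ai-research.rss",
    "industry": "https://example.com/sector-news.rss",
}

def scan(keywords: tuple[str, ...] = ("AI", "artificial intelligence")) -> None:
    """Print recent items from each stream whose title mentions a keyword."""
    for stream, url in SOURCES.items():
        feed = feedparser.parse(url)
        for entry in feed.entries[:10]:  # most recent items only
            title = entry.get("title", "")
            if any(k.lower() in title.lower() for k in keywords):
                print(f"[{stream}] {title} -> {entry.get('link', '')}")

if __name__ == "__main__":
    scan()
```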
Analytical Frameworks and Knowledge Management
Beyond collecting information, effective organisations apply structured analysis:
- Pattern Recognition: Identify emerging trends in regulatory focus or industry adoption.
- Scenario Development: Create plausible "what if" exercises to test organisational readiness.
- Integration with Risk Management: Connect AI insights with broader enterprise risk frameworks.
These analytical processes are supported by centralised knowledge repositories where team members document noteworthy developments, creating an institutional memory that informs future decisions.
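A centralised repository need not be elaborate. The sketch below, using only Python's standard library, shows one minimal way to log developments to a local SQLite database; the schema and field names are illustrative assumptions rather than a recommended design.

```python
# Minimal sketch of a centralised knowledge log (schema is illustrative).
import sqlite3
from datetime import date

conn = sqlite3.connect("horizon_scan.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS developments (
        id INTEGER PRIMARY KEY,
        logged_on TEXT NOT NULL,
        domain TEXT NOT NULL,        -- regulatory / technological / market
        summary TEXT NOT NULL,
        assessed_impact TEXT         -- free-text analysis for future reference
    )
""")

def log_development(domain: str, summary: str, impact: str) -> None:
    """Record a noteworthy development so it becomes institutional memory."""
    conn.execute(
        "INSERT INTO developments (logged_on, domain, summary, assessed_impact) "
        "VALUES (?, ?, ?, ?)",
        (date.today().isoformat(), domain, summary, impact),
    )
    conn.commit()

log_development(
    "regulatory",
    "Draft guidance published on generative AI in legal services",
    "Review contract playbooks; brief the board at next quarterly update",
)
```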
Four Practical Steps to Consider
For legal departments that wish to establish or enhance their horizon-scanning capabilities, the following practical considerations may be relevant:
1. Define Scope and Structure
Deciding what to monitor, and who will hold responsibility for it, provides the foundation for a sustainable effort. Many organisations form dedicated teams with cross-functional representation that meet regularly to review developments and assess their implications.
2. Establish Information Flows
Businesses can benefit from creating systems to capture, analyse and distribute relevant intelligence (a sketch of such a rota follows this list). This typically involves:
- Assigning specific monitoring responsibilities across team members.
- Setting appropriate review frequencies based on the pace of change in each area.
- Creating channels to share insights with decision-makers.
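One lightweight way to make such a rota explicit is to express it as plain data. The Python sketch below is hypothetical: the areas, owners, and review frequencies are placeholders to be adapted to the organisation's own priorities.

```python
# Illustrative monitoring rota, assuming faster-moving areas are reviewed more often.
from datetime import date, timedelta

ROTA = {
    "EU / UK regulatory developments": {"owner": "Legal", "review_days": 7},
    "Model and product releases": {"owner": "IT / Data science", "review_days": 14},
    "Industry standards (ISO, IEEE)": {"owner": "Compliance", "review_days": 30},
}

def next_reviews(today: date | None = None) -> None:
    """Print when each area is next due for review and who owns it."""
    today = today or date.today()
    for area, cfg in ROTA.items():
        due = today + timedelta(days=cfg["review_days"])
        print(f"{area}: {cfg['owner']} reviews by {due.isoformat()}")

next_reviews()
```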
3. External Engagement
To enhance internal monitoring with external perspectives and deepen understanding, teams can:
- Participate in industry groups and AI governance forums.
- Engage with specialist advisors and thought leaders.
- Review publications from law firms, consultancies, and academic institutions.
4. Measure Value and Impact
Documenting instances where monitoring has helped identify opportunities or avoid problems demonstrates its value. When insights inform strategic decisions or help the organisation navigate regulatory change effectively, these outcomes reinforce the case for continued investment in forward-looking intelligence.
Looking Ahead
By systematically monitoring AI developments, legal teams can help their organisations anticipate technological, political, and regulatory shifts rather than merely react to them. This approach represents an important evolution in how legal professionals contribute to organisational resilience in a rapidly changing technological landscape.
In the next part of this series, we will examine specific regulatory developments related to AI and the considerations required to navigate this evolving governance environment.
Disclaimer: This article is provided for information purposes only and its contents should not be considered legal advice. The article is based on publicly available information and, while care is taken in compiling it, no warranty, express or implied, is given, nor does Deminor assume any liability for its use.