Suvoraj Biswas, Architect at Ameriprise Financial Services — Generative AI Framework, Compliance, AI at Scale, Cloud Convergence, DevSecOps, AI Governance, Global Regulations, and Future AI Trends


In this insightful interview, we speak with Suvoraj Biswas, an Architect at Ameriprise Financial Services, a Fortune 500 financial giant with over 130 years of history. Suvoraj offers a wealth of knowledge on the evolving role of Generative AI in enterprise IT, particularly within highly regulated industries like finance. From strategies for large-scale AI deployment to navigating security and compliance challenges, Suvoraj shares critical insights on how businesses can leverage AI responsibly and effectively. Readers will also learn about the future convergence of cloud technologies, DevSecOps, and AI, alongside emerging trends that could reshape enterprise architecture.

Suvoraj, as a pioneer in the field of Generative AI, what inspired you to write your award-winning book, “Enterprise Generative AI Well-Architected Framework & Patterns”? Can you share any key takeaways from your research that you believe every enterprise should know?

As a Solutions Architect, I faced many challenges when I first started working with Generative AI. These experiences motivated me to write “Enterprise Generative AI Well-Architected Framework & Patterns.” I saw that as more businesses adopt AI, there is a growing need for scalable and reliable architectures and knowledge of proven patterns that make integrating large language models (LLMs) easier while ensuring long-term success. One key takeaway from my research is that enterprises should focus on building a flexible yet secure IT architecture that accommodates the evolving nature of Generative AI alongside their business objectives.
Focusing on data governance, privacy, and ethical AI practices is essential for ensuring both scalability and trust among stakeholders at all levels of the organization. Also, aligning Generative AI use cases with business objectives helps maximize their value and ensures a seamless adoption process across diverse enterprise landscapes.

With your extensive experience in both architecture and governance, how do you approach the challenges of ensuring compliance and security when adopting Generative AI within large financial institutions? 

With my background in both architecture and governance, I approach the challenges of ensuring compliance and security in Generative AI by emphasizing a well-architected framework. In my book, I outlined an Enterprise Generative AI Framework that integrates into the existing enterprise architecture, offering a standardized approach to address these concerns. It is designed to help not only financial institutions but any enterprise adopt Generative AI securely. The framework is built around essential building blocks and pillars that allow organizations to adopt Generative AI while managing risk, and it includes proven patterns that ensure regulatory compliance and secure handling of sensitive data, which are crucial for large financial institutions.


By following this methodology, companies can mitigate both business and technical challenges, ensuring that Generative AI is not only scalable and effective but also safe and compliant with industry regulations. One of the key pillars I emphasize is embedding security and governance within the Generative AI architecture itself.
By incorporating compliance checks at every stage—whether during data ingestion, building vector-based knowledge bases, retrieval using the popular RAG (Retrieval-Augmented Generation) pattern, model training, or deployment—the framework ensures that financial institutions, as well as any regulated industry, can adhere to strict regulatory requirements while still leveraging the power of Generative AI.
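To make that concrete, here is a minimal, hypothetical sketch of what compliance gates embedded at the ingestion and retrieval stages of a RAG pipeline could look like. The check logic, classification labels, and vector-store interface are illustrative assumptions, not part of any specific framework.

```python
# Illustrative sketch: compliance gates embedded at ingestion and retrieval
# in a RAG pipeline. The policy checks and vector-store interface are
# hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source: str
    classification: str  # e.g. "public", "internal", "restricted"

def redact_pii(text: str) -> str:
    """Placeholder for a PII-scrubbing step (names, account numbers, etc.)."""
    return text  # a real implementation would call a PII-detection library

def ingest(doc: Document, vector_store) -> None:
    # Compliance gate 1: block restricted material before it is embedded.
    if doc.classification == "restricted":
        raise PermissionError(f"{doc.source} cannot be added to the knowledge base")
    vector_store.add(
        redact_pii(doc.text),
        metadata={"source": doc.source, "classification": doc.classification},
    )

def retrieve(query: str, user_clearance: str, vector_store) -> list[str]:
    # Compliance gate 2: filter retrieved chunks against the caller's clearance.
    hits = vector_store.search(query, top_k=5)
    return [h.text for h in hits
            if h.metadata["classification"] in ("public", user_clearance)]
```

The same gating idea extends to the training and deployment stages, with the checks tailored to whatever regulations apply.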

Generative AI is often seen as a transformative tool, but also a complex one to implement at scale. What strategies do you recommend for organizations looking to integrate Generative AI while maintaining a balance between innovation and risk management? 

In my experience, a scalable enterprise architecture and close collaboration between enterprise architects and the engineering team are extremely important for implementing Generative AI at scale while maintaining the required balance. There are several strategies, or combinations of strategies, that enterprise leaders (CxOs such as CTOs or CIOs) can undertake before rushing to adopt Generative AI into a company’s ecosystem:


a) Align all Generative AI projects with the organization’s core business objectives. This ensures that AI solutions deliver real value, whether by enhancing customer experiences, improving operations, or driving new revenue streams. At the same time, it’s essential to build flexibility into the architecture, allowing the organization to scale AI systems as the business grows and new technologies emerge.
b) Prioritize governance, compliance, and security from the start. This includes ensuring data privacy, implementing ethical AI practices, and closely following industry regulations, especially in highly regulated sectors like finance and healthcare. By embedding compliance and security into the system architecture, organizations can mitigate risks while driving innovation.
c) Foster cross-functional collaboration. Involving cross-functional teams, including legal, compliance, and other business stakeholders, ensures a holistic approach to risk management and buy-in from everyone. This helps create a system that supports innovation while safeguarding the organization from potential risks, making the adoption of Generative AI successful, scalable, and secure.

You’ve been involved in numerous large-scale digital transformation projects. How do you see the role of Generative AI evolving in shaping the future of enterprise IT architectures, particularly within the financial sector? 

No doubt, Generative AI is going to play a key role in shaping the future of enterprise IT architectures across all sectors, especially finance and healthcare. From my experience with large-scale digital transformation projects, I see Generative AI driving automation, enhancing decision-making, and improving customers’ digital experiences by generating and processing large amounts of data efficiently. In the financial sector, where security, compliance, and data privacy are critical, Generative AI can help streamline operations while maintaining strict regulatory standards. By integrating Generative AI into enterprise IT architectures, financial organizations can unlock new ways to optimize processes, personalize services, and even detect fraud more effectively.

However, it’s essential to balance innovation with a strong focus on risk management, which ensures that the AI systems are both scalable and secure. As Generative AI continues to evolve, it will become a foundational component of modern enterprise IT strategies, enabling financial institutions to stay competitive, innovate faster, and deliver more value to their customers. 

As an architect who has worked with cloud adoption, SaaS platform engineering, and multi-cloud strategies, how do you envision the convergence of cloud technologies and AI driving future enterprise systems?

As an architect, I have gained professional experience in cloud adoption, SaaS platform engineering, and multi-cloud strategies. Based on that experience, I see the convergence of cloud technologies and Generative AI transforming enterprise systems by boosting flexibility, scalability, and innovation. Cloud platforms provide the ideal infrastructure for running Generative AI models at scale, which require significant computing power. By utilizing cloud-based GPUs, enterprises can run these models more cost-effectively, reducing the total cost of ownership (TCO) compared to maintaining on-premises infrastructure. This shift makes it easier for businesses to scale their AI solutions without heavy upfront investment.

Generative AI, particularly large language models, is highly scalable when deployed on a multi-cloud platform. For example, using services like Amazon Bedrock, enterprises can easily integrate and consume popular open-source foundation models as well as proprietary models from innovative companies (AI21 Labs, Anthropic, Stability AI) without needing to manage complex infrastructure. This allows organizations to seamlessly apply Generative AI to a variety of use cases, from customer support to personalized experiences, while maintaining control over security, privacy, and compliance. By combining Generative AI with cloud technology, enterprises can accelerate innovation, streamline operations, and gain deeper insights, all while minimizing costs and improving overall efficiency. This convergence will be a key driver of the future of enterprise IT systems.
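As a rough illustration of how little infrastructure a managed service leaves to the consumer, the sketch below calls a foundation model through Amazon Bedrock's Converse API using boto3. It assumes AWS credentials are configured and the chosen model has been enabled in the account; the model ID, prompt, and inference parameters are placeholders.

```python
# Hedged sketch: invoking a foundation model via Amazon Bedrock with boto3.
# Assumes AWS credentials are configured and the model is enabled for the
# account; the model ID below is illustrative.

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{"role": "user",
               "content": [{"text": "Summarize our refund policy for a customer email."}]}],
    inferenceConfig={"maxTokens": 300, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```

Swapping providers is largely a matter of changing the model ID, which is what makes this kind of managed, multi-model service attractive for enterprises.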

Given your background in DevOps and DevSecOps, what role do you think these methodologies will play in the deployment and governance of AI systems? Are there specific best practices that can help streamline this process? 

In my view, DevOps and DevSecOps play a vital role in the deployment and governance of AI systems. They ensure that AI models are delivered efficiently and securely through automation and continuous monitoring. Organizations can integrate AI into enterprise environments more smoothly by automating deployments and embedding security from the start in the build and deployment pipelines. One important aspect is the governance of AI-generated content. For better compliance, it’s essential to move AI-generated data into secure vaults such as Microsoft Purview, Jatheon, Bloomberg Vault, or Global Relay.

These solutions provide secure storage and ensure that the content is protected and managed in accordance with regulations, especially in industries with strict compliance requirements. Following DevSecOps practices during Generative AI development safeguards you from surprises during a regulatory audit. Another key practice is incorporating synthetic data generated by Generative AI into the DevOps pipeline. This synthetic data helps teams perform more effective smoke and integration testing, simulating complex real-world scenarios before launching products or features into production. It helps identify potential issues early on, making the overall testing process more robust and efficient. Pairing AI content governance with DevOps and DevSecOps methodologies helps organizations not only accelerate deployments and improve security but also enhance testing, leading to a more scalable and compliant AI infrastructure.
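A hedged sketch of the synthetic-data idea follows: a generation step produces fake but realistic records, and a smoke test runs them through the system under test before release. The record schema and both helper functions are hypothetical stand-ins for whatever an actual pipeline would exercise.

```python
# Illustrative sketch: using synthetic records in a CI smoke test.
# generate_synthetic_records() stands in for a Generative AI call that
# produces fake but realistic data; process_transaction() stands in for
# the system under test.

import random

def generate_synthetic_records(n: int) -> list[dict]:
    """Stand-in for an LLM-backed synthetic data generator."""
    return [{"account_id": f"ACCT-{i:05d}",
             "amount": round(random.uniform(1, 5000), 2),
             "currency": random.choice(["USD", "CAD", "INR"])}
            for i in range(n)]

def process_transaction(record: dict) -> bool:
    """Stand-in for the pipeline being smoke-tested."""
    return record["amount"] > 0 and record["currency"] in {"USD", "CAD", "INR"}

def test_pipeline_smoke():
    # Run a batch of synthetic records through the pipeline before release.
    for record in generate_synthetic_records(100):
        assert process_transaction(record), f"failed on {record}"

if __name__ == "__main__":
    test_pipeline_smoke()
    print("smoke test passed")
```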

AI governance is a topic you’re passionate about. In your opinion, what are the most critical governance issues that organizations must address to safely deploy Generative AI at scale, particularly in highly regulated industries like finance? 

I’m really passionate about AI and the corresponding data governance, especially when it comes to deploying Generative AI at scale in highly regulated industries like finance and healthcare, as well as retail and supply chain. One of the most critical governance issues organizations must address is data privacy. It’s essential to ensure that any data used to train AI models complies with regulations and that sensitive information is protected at all times. Any dataset used to fine-tune large language models should go through an internal audit with buy-in from internal stakeholders, and should be sanitized and cleaned, with the required tags and labels attached, before being used.

Another important issue is content governance. Organizations should implement processes to move AI-generated content into secure storage solutions like Microsoft Purview or Bloomberg Vault. This not only safeguards the data but also helps maintain compliance with industry standards.

Data and architecture transparency is also vital to an organization’s internal and external stakeholders. Organizations need to be clear about how AI models make decisions and ensure that stakeholders understand the implications of using AI, by enforcing explainable AI as part of the enterprise process and culture. This is particularly important in finance, where decisions can significantly impact customers and the market.
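As a minimal sketch of the pre-fine-tuning sanitization and tagging step described above, the snippet below masks obvious PII and attaches governance tags to each training record. The regular expressions, tag names, and record layout are illustrative assumptions rather than a prescribed standard.

```python
# Illustrative sketch: sanitizing and tagging records before they are used
# to fine-tune a model. The PII patterns and tag values are placeholders.

import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def sanitize(text: str) -> str:
    """Mask obvious PII before the text enters a fine-tuning dataset."""
    text = SSN_PATTERN.sub("[REDACTED-SSN]", text)
    return EMAIL_PATTERN.sub("[REDACTED-EMAIL]", text)

def label(record: dict, owner: str, classification: str) -> dict:
    """Attach the governance tags auditors expect to see on training data."""
    return {"text": sanitize(record["text"]),
            "tags": {"owner": owner,
                     "classification": classification,
                     "approved_for_training": True}}

example = label({"text": "Contact jane@example.com about SSN 123-45-6789."},
                owner="wealth-management", classification="internal")
print(example)
```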

Finally, integrating synthetic data into the development and testing processes can enhance the scalability and robustness of applications and products. By using this data for smoke and integration testing, organizations can simulate complex scenarios and identify potential issues before they arise in real-world applications. Overall, by addressing these governance issues, organizations can safely and effectively deploy Generative AI while minimizing risks and ensuring the reliability of their systems and the surrounding enterprise architecture, which increases overall customer trust and satisfaction.

You have worked in various geographies, including India, the United States, and Canada. How do you think regional regulations and attitudes toward AI and automation differ, and how does this impact your approach to AI architecture in different markets? 

Having worked in India, the United States, and Canada, I’ve noticed distinct differences in regional regulations and attitudes toward AI and automation. In the United States, there’s a strong focus on innovation and rapid adoption, but also significant scrutiny regarding data privacy and ethical use. Canada tends to emphasize transparency and inclusivity in AI governance, while India is increasingly embracing AI but faces challenges with regulatory frameworks and infrastructure. These differences impact my approach to AI architecture by necessitating tailored solutions for each market. In the U.S., I would recommend prioritizing compliance with stringent data regulations and focusing on scalable, innovative architectures. In Canada, I would emphasize transparency and ethical practices, ensuring that AI solutions align with local values. In India, I would consider the need for cost-effective and adaptable solutions that can work within evolving regulatory environments. This regional awareness helps me create scalable Generative AI architectures that are not only effective but also compliant and culturally sensitive.

In your experience, what are some common misconceptions enterprises have about Generative AI, and how do you work to dispel these myths in your role as an architect and thought leader? 

In my experience, common misconceptions enterprises have about Generative AI include thinking it can completely replace human intelligence and judgment in decision-making, and believing it always requires vast amounts of historical data to work effectively. Many also assume that once an AI model is deployed, it doesn’t need ongoing monitoring or updates. Some organizations also believe Generative AI is extremely costly and requires complex infrastructure to run and perform inference. To address these myths, I focus on education and clear communication. In my book, I explain that Generative AI is a tool that enhances human capabilities rather than replacing them, and that it should support decision-making rather than dictate it. I also highlight that while larger datasets can improve performance, high-quality smaller datasets can still be effective. I also emphasize the need for continuous monitoring and refinement of AI models after deployment by integrating an observability layer over the model’s performance and the data it generates. By sharing best practices and real-world examples, I help enterprises understand the potential and limitations of Generative AI, enabling them to make informed decisions for successful AI projects.
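One hedged way to picture that observability layer is a thin wrapper that records latency, approximate token counts, and a simple content flag for every model call, as sketched below. The metric names and flagging rule are illustrative assumptions, not a specific product's API.

```python
# Illustrative sketch: a thin observability wrapper around a model call,
# recording latency, rough token counts, and a simple content flag for
# later review. The metrics and flagging rule are placeholders.

import time
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-observability")

BLOCKED_TERMS = {"account number", "ssn"}  # illustrative flag list

def observed_generate(model_call, prompt: str) -> str:
    start = time.perf_counter()
    output = model_call(prompt)            # any callable that returns text
    latency_ms = (time.perf_counter() - start) * 1000
    flagged = any(term in output.lower() for term in BLOCKED_TERMS)
    log.info("latency_ms=%.1f prompt_tokens~%d output_tokens~%d flagged=%s",
             latency_ms, len(prompt.split()), len(output.split()), flagged)
    return output

# Usage with a dummy model call:
print(observed_generate(lambda p: "Here is a summary of your request.",
                        "Summarize my portfolio"))
```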

Finally, looking ahead, what excites you the most about the future of Generative AI in enterprise applications? Are there any emerging trends or technologies that you believe will play a pivotal role in its next phase of development? 

What excites me most about the future of Generative AI in enterprise applications is its potential to drive innovation and efficiency. Emerging trends, such as the integration of Generative AI with edge computing and IoT, will enable real-time data processing and smarter automation, allowing businesses to respond quickly to changes. The growing focus on ethical AI will lead to advancements in governance frameworks that ensure responsible deployment and better observability. The rise of synthetic data generation will also be crucial, as it allows organizations to create high-quality data for training and testing AI models, helping overcome data limitations and enhance performance. Together, these developments promise to reshape enterprise applications and make Generative AI an even more powerful tool for growth and innovation.
