Zoom has this week announced that the AWS Virtual Participant Framework for RTC open-source project is now available for users, businesses, and developers, and can be downloaded from GitHub.

If you would like to get started using the new framework, set up a test account in AWS, ideally with administrative privileges, says Zoom. Then, over at the Zoom App Marketplace, create a developer account if you don't have one already, and follow the instructions in the README file of the GitHub repo.

"The AWS Virtual Participant Framework for RTC removes undifferentiated heavy lifting in building custom integrations between Zoom and AWS," said Sina Sojoodi, Principal Solutions Architect at AWS. "This sample solution combines the real-time communication capabilities of the Zoom Meeting SDK with AWS AI services (Amazon Transcribe), serverless computing (AWS Lambda and Fargate), and media streaming (Amazon Kinesis Video Streams) for developers to build meaningful experiences for end users. It also helps to reduce operational burden by standardizing meeting-participant live-media access in the cloud. This translates to reduced costs of running fleets of virtual participants with containerization and serverless architecture.

"With recent advancements in generative AI, natural language processing (NLP), and computer vision, we see tremendous opportunity for startup, enterprise, and public sector developers to build agent assist, live translation, visual content moderation, identity verification, and AI-assisted collaboration and productivity applications. Amazon Transcribe Live Call Analytics with Agent Assist, showcased in the GitHub project demo recording, is a great example of the types of solutions that are possible with this virtual participant framework."
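The architecture described above, in which a containerized virtual participant joins a meeting, captures live media, and forwards it to cloud AI services, can be sketched in miniature. The sketch below is a conceptual illustration only: `VirtualParticipant`, `AudioFrame`, and the `sink` callback are placeholder names invented here, not the framework's actual API. In the real solution, the capture role is played by a Fargate container running the Zoom Meeting SDK, and the sink would be a producer feeding Amazon Kinesis Video Streams and on to Amazon Transcribe.

```python
import time
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AudioFrame:
    """One chunk of PCM audio captured from the meeting (placeholder type)."""
    timestamp: float
    pcm: bytes

@dataclass
class VirtualParticipant:
    """Toy model of a bot that joins a meeting and forwards its audio.

    `sink` stands in for a media-stream producer; in the real framework
    this would push frames to Amazon Kinesis Video Streams.
    """
    sink: Callable[[AudioFrame], None]
    buffer: List[AudioFrame] = field(default_factory=list)

    def on_audio(self, pcm: bytes) -> None:
        # Buffer raw audio as it arrives from the meeting client.
        self.buffer.append(AudioFrame(time.time(), pcm))

    def flush(self) -> int:
        # Forward all buffered frames to the sink, in arrival order,
        # and return how many were sent.
        sent = 0
        for frame in self.buffer:
            self.sink(frame)
            sent += 1
        self.buffer.clear()
        return sent

# Usage: collect frames from a stub sink instead of a real stream.
received: List[AudioFrame] = []
bot = VirtualParticipant(sink=received.append)
bot.on_audio(b"\x00\x01")
bot.on_audio(b"\x02\x03")
print(bot.flush())  # 2
```

Separating capture (`on_audio`) from delivery (`flush`) mirrors the decoupling the framework aims for: the meeting-side container only standardizes live-media access, while downstream AI services consume the stream independently.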