Ensure compatibility with a broad range of frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the primary capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

For local files, similar code can be used to obtain a transcription:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is particularly useful for applications that require immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

Using LeMUR for LLM Apps

The SDK integrates with LeMUR, enabling developers to build large language model (LLM) applications on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);

Audio Intelligence Models

Additionally, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

For more information, visit the official AssemblyAI blog.
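As a closing sketch, the snippets above can be combined into a single top-level program that transcribes a remote file and then asks LeMUR to summarize it. This only reuses the calls shown earlier in this article; the AssemblyAI.Lemur using directive and the top-level-statements project setup are assumptions, so confirm them against the official SDK documentation.

using AssemblyAI;
using AssemblyAI.Transcripts;
using AssemblyAI.Lemur; // namespace assumed for the LeMUR types used below

// Create one client and reuse it for both transcription and LeMUR.
var client = new AssemblyAIClient("YOUR_API_KEY");

// Transcribe a remote audio file (the same sample file used in the examples above).
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});
transcript.EnsureStatusCompleted();

// Pass the completed transcript to LeMUR for a short summary.
var response = await client.Lemur.TaskAsync(new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
});

Console.WriteLine(response.Response);

Because every call is awaited, this flow drops naturally into an async console app or a background service without any additional threading code.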