I'm trying to process audio through Google's Speech-to-Text API and get back the language being spoken. I don't need the actual transcript. I'm using Flutter, and I've looked at various documentation and related questions on this site.

Apparently, the original v1 API doesn't support language detection. So I've looked at the beta API (v1p1beta1), which does seem to support it via an option for alternative language codes, according to here.

However, the only examples I can get my hands on use the original v1 API, and apparently some functions are a bit different in the beta API.

Below is the code I have put together using the original API documentation and answers on Stack Overflow. It doesn't work with the beta API.

import 'package:flutter/material.dart';
import 'package:googleapis/speech/v1.dart';
import 'package:googleapis_auth/auth_io.dart';

// Service account key JSON omitted.
final _credentials = new ServiceAccountCredentials.fromJson(r'''
...
''');

const _SCOPES = const [SpeechApi.CloudPlatformScope];

void speechToText() {
  clientViaServiceAccount(_credentials, _SCOPES).then((http_client) {
    var speech = new SpeechApi(http_client);

    // "config" and "audio" are siblings in the request body.
    final _json = {
      "config": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 16000,
        "languageCode": "fr-FR",
        "alternativeLanguageCodes": ["en-US", "ko-KR"],
      },
      "audio": {
        "content": "...", // base64-encoded audio, omitted
      },
    };

    final _recognizeRequest = RecognizeRequest.fromJson(_json);
    speech.speech.recognize(_recognizeRequest).then((response) {
      print(response.toJson());
    });
  });
}

The problems are the following:

  1. The original v1 API doesn't support the "alternativeLanguageCodes" config option, and therefore doesn't seem to support language detection at all.

  2. The beta API seems to work differently from the original API, and I could only find examples for the original API.

  3. I have looked at the beta API itself and have spent the last hour going over the same material, but I still can't figure out how to make it work.
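For reference, this is the direction I tried with the beta bindings. It's only a sketch: I'm assuming the `googleapis_beta` package exposes `speech/v1p1beta1.dart` with the same `RecognizeRequest` shape as v1, and that each result carries a `languageCode` field with the detected language; please correct me if either assumption is wrong.

```dart
import 'package:googleapis_beta/speech/v1p1beta1.dart';
import 'package:googleapis_auth/auth_io.dart';

// Service account key JSON omitted.
final _credentials = ServiceAccountCredentials.fromJson(r'''
...
''');

const _scopes = [SpeechApi.CloudPlatformScope];

void detectLanguage() {
  clientViaServiceAccount(_credentials, _scopes).then((httpClient) {
    final speech = SpeechApi(httpClient);

    final request = RecognizeRequest.fromJson({
      "config": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 16000,
        "languageCode": "fr-FR",
        // Only available in the beta API, as far as I can tell:
        "alternativeLanguageCodes": ["en-US", "ko-KR"],
      },
      "audio": {
        "content": "...", // base64-encoded audio, omitted
      },
    });

    speech.speech.recognize(request).then((response) {
      // I expect each result to report which language was detected.
      for (final result in response.results ?? []) {
        print(result.languageCode);
      }
    });
  });
}
```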

Would anyone please be able to help me? Thank you!