WebRTC for ASP.NET developers

This post links to demos and tutorials that help ASP.NET WebForms and ASP.NET MVC developers quickly get started with WebRTC in their applications.

First of all, please don't forget to check this post: I want to learn WebRTC!

Here is a simple one-to-one ASP.NET MVC-based demo & source code:

This demo uses MS-SQL to store SDP and ICE messages and to sync them among room participants. It has the following features:
  1. Private/Public rooms creations
  2. Password protected rooms
  3. MS-SQL for signaling and presence detection
  4. One-to-One connections
  5. List of public rooms, per-room stats, number of users in each room, etc.

There is another XHR-based signaling demo and source code that can fit in any WebRTC application and demo:

This demo has the following features:
  1. It can be used in any WebRTC Experiment
  2. It uses MS-SQL for signaling
  3. It supports re-usability of the code
  4. It can be used with RTCMultiConnection.js and DataChannel.js
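As a rough sketch of how such XHR-based signaling can work, the snippet below queues SDP/ICE messages for a room and hands them to a pluggable transport function. The payload shape, the `roomId` field, and the `/signaling/post` endpoint are assumptions for illustration, not taken from the actual demo:

```javascript
// Hypothetical sketch of XHR-based signaling: queue SDP/ICE messages
// for a room and hand them to a pluggable transport function.
// The payload shape and endpoint name are assumptions, not from the demo.
function createSignaler(roomId, transport) {
    var queue = [];

    return {
        // queue an SDP or ICE message and forward it to the transport
        send: function (message) {
            var payload = {
                roomId: roomId,
                message: JSON.stringify(message)
            };
            queue.push(payload);
            if (transport) transport(payload);
            return payload;
        },
        // number of messages sent so far
        pending: function () {
            return queue.length;
        }
    };
}

// In the browser, the transport would be an XMLHttpRequest POST:
// function xhrTransport(payload) {
//     var xhr = new XMLHttpRequest();
//     xhr.open('POST', '/signaling/post', true);
//     xhr.setRequestHeader('Content-Type', 'application/json');
//     xhr.send(JSON.stringify(payload));
// }
```

Because the transport is injected, the same queueing logic works whether the backend stores messages in MS-SQL, a file, or memory.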
Here is how it can be used in RTCMultiConnection demos:
There is another implementation and source code that uses WebSync:
WebSync is an implementation of the Bayeux specification, commonly known as "comet", for the .NET framework and IIS. Ref
Try WebSync demos in action:
For those who want to use SignalR instead:
P.S. This post will be updated with future ASP.NET-based WebRTC demos, e.g. SignalR demos.

WebRTC Tips & Tricks

This blog post is for WebRTC newcomers and beginners who want to learn the key ideas, grab code snippets, and enjoy WebRTC!

1. How to mute/unmute media streams?

Remember, mute/unmute isn't a default/native feature in either the media-capture draft (the getUserMedia API) or the WebRTC draft (the RTCPeerConnection API).

Also, no "onmuted" or "onunmuted" events are defined or fired in native WebRTC implementations.

Usually, as the Chromium team suggests, media tracks are enabled/disabled to mute/unmute the streams.

Remember, setting "MediaStreamTrack.enabled=false" does NOT stop packet transmission; packets keep flowing, they are just devoid of meaningful data (silent audio or blank/black video). A real solution is to hold/unhold tracks in the SDP and renegotiate the connection. See the next section for more information.

A MediaStream object is just a synchronous container. Don't think of it as an object that interacts with the media source (audio/video input/output devices). That's why it was suggested to move the "stop" method from the "MediaStream" level to the "MediaStreamTrack" level.

MediaStreamTracks define the kind of input/output device. A single MediaStreamTrack object can reference multiple media devices.

Each MediaStreamTrack has an "enabled" property as well as a "stop" method. You can use the "stop" method to release the relevant media source, whether it is an audio source or a video one.
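As a small sketch of track-level stopping (the helper name is mine, not from any library), this releases every source of a stream one track at a time:

```javascript
// Sketch: stop each track of a stream individually instead of relying
// on the stream-level stop() method. The helper name is an assumption.
function stopAllTracks(stream) {
    var tracks = stream.getAudioTracks().concat(stream.getVideoTracks());

    tracks.forEach(function (track) {
        track.stop(); // releases the underlying microphone/camera source
    });

    return tracks.length; // number of tracks stopped
}
```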

Mute/unmute usually happens by setting a Boolean value for the "enabled" property of each MediaStreamTrack.

When a MediaStreamTrack is "enabled=true" it is unmuted; when a MediaStreamTrack is "enabled=false" it is muted.

var audioTracks = localMediaStream.getAudioTracks();
var videoTracks = localMediaStream.getVideoTracks();

// if MediaStream has reference to microphone
if (audioTracks[0]) {
    audioTracks[0].enabled = false;
}

// if MediaStream has reference to webcam
if (videoTracks[0]) {
    videoTracks[0].enabled = false;
}

Keep in mind that you're disabling a media track locally; it will not fire any event on the remote user's side. If you disable a video track, the remote user will see a "blank" video.

You can manually fire events like "onmuted" or "onmediatrackdisabled" by using the socket that was used for signaling. You can send/emit messages like:

yourSignalingSocket.send({
    isMediaStreamTrackDisabled: true,
    mediaStreamLabel: stream.label
});

Target users' "onmessage" handler can watch for "isMediaStreamTrackDisabled" Boolean and fire "onmediatrackdisabled" accordingly:

yourSignalingSocket.onmessage = function (event) {
    var data = event.data;

    if (data.isMediaStreamTrackDisabled === true) {
        emitEvent('mediatrackdisabled', true);
    } else {
        emitEvent('mediatrackdisabled', false);
    }
};

One last point: disabling "remote" media stream tracks is useless unless the remote user disables his "local" media stream tracks. So, use the signaling socket to exchange messages between users, informing them when media tracks are enabled/disabled, and fire relevant events that can be used to display video posters or overlays!

People often ask about "video.pause()" and "video.muted=true". Remember, these simply set the playback status of the local video element; the changes will NEVER be synchronized to other peers unless you listen for the "video.onplay" and "video.onpause" handlers.

You can also listen for "video.onvolumechange" to synchronize volume among all users.

Remember, you can use WebRTC data channels, websockets, or any other signaling means to synchronize such statuses.
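As a rough sketch of such synchronization, both sender and receiver can share one tiny helper pair. The message format, with its "action" and "trackKind" fields, is my own assumption, not from any of the demos above:

```javascript
// Hypothetical message format for syncing mute state over a data
// channel or websocket; the "action"/"trackKind" field names are
// assumptions for illustration.
function buildMuteMessage(trackKind, muted) {
    return JSON.stringify({
        action: muted ? 'mute' : 'unmute',
        trackKind: trackKind // 'audio' or 'video'
    });
}

// apply an incoming mute/unmute message to a stream and return the
// parsed data so the app can fire its own events (e.g. show a poster)
function applyMuteMessage(json, stream) {
    var data = JSON.parse(json);
    var tracks = data.trackKind === 'audio'
        ? stream.getAudioTracks()
        : stream.getVideoTracks();

    tracks.forEach(function (track) {
        track.enabled = (data.action !== 'mute');
    });

    return data;
}
```

The sender would call buildMuteMessage('audio', true) right after disabling its own local track, so both sides stay in sync.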

2. How to hold/unhold media calls?

Currently, WebRTC's signaling mechanism is based on the offer/answer model, which in turn uses the Session Description Protocol (SDP) to exchange metadata and key session requirements among peers.

SDP is formatted so that MediaStreamTracks are given a unique media line (m-line); each media line can contain references to multiple similar media tracks.

Each media line has an "a=sendrecv" attribute, which declares the incoming/outgoing media direction. You can easily replace "sendrecv" with "inactive" to put that media track on hold.

Don't forget that a single media line can contain multiple media stream tracks; so if you're planning to hold all audio tracks, search for the audio media line in the SDP and replace "a=sendrecv" with "a=inactive".

Also, please keep in mind that "sendrecv" isn't the only value the "a=" attribute can take. Possible values are:
  1. sendrecv: two-way media flow
  2. sendonly: one-way outgoing media flow
  3. recvonly: one-way incoming media flow
  4. inactive: call on hold, i.e. no media flow

So, replace any of the first three values with "inactive" to put the media tracks on hold; later, restore "inactive" to its previous value to leave the "hold" state.

After altering the local session description (peer.localDescription), you MUST renegotiate the peer connection to make sure the changes are applied on both sides.

Renegotiation is the process of recreating offer/answer descriptions and setting remote descriptions again.

Remember, you're not creating new peer connections; you're using existing peer objects and invoking createOffer/createAnswer as well as setRemoteDescription again.
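A minimal sketch of the hold step (the helper name is mine; for simplicity, unhold restores "sendrecv" rather than remembering the original direction): it rewrites the direction attribute of the audio m-line, after which you would invoke createOffer/setLocalDescription again and re-send the description over your signaling channel:

```javascript
// Sketch: put the audio m-line on hold by rewriting its direction
// attribute. The helper name is an assumption; unhold simply restores
// "sendrecv" instead of the original direction.
function setAudioHold(sdp, hold) {
    var lines = sdp.split('\r\n');
    var inAudioSection = false;

    for (var i = 0; i < lines.length; i++) {
        // track which m-line section we are inside
        if (lines[i].indexOf('m=') === 0) {
            inAudioSection = lines[i].indexOf('m=audio') === 0;
        }

        // rewrite the direction attribute inside the audio section only
        if (inAudioSection && /^a=(sendrecv|sendonly|recvonly|inactive)$/.test(lines[i])) {
            lines[i] = hold ? 'a=inactive' : 'a=sendrecv';
        }
    }

    return lines.join('\r\n');
}
```

You would apply this to peer.localDescription.sdp, then renegotiate so the remote peer receives the new direction.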

At the moment, renegotiation works only in Chromium-based browsers; so the hold/unhold feature will work only in Chrome/Opera.

You can learn more about renegotiation here: https://www.webrtc-experiment.com/docs/how-to-switch-streams.html

3. How to check if a peer connection is established?

Simply set an event listener for "oniceconnectionstatechange" and check for "peer.iceConnectionState=='completed'".

ICE connection state flows like this:

(success) new => checking => connected => completed
(failure) new => checking => connected => disconnected => new => closed



Sometimes the ICE connection drops due to network loss or other unexpected situations. In such cases, the ICE connection state changes to "disconnected".

Another tricky point: if you're renegotiating peers, the ICE connection state first changes to "disconnected" and then restarts: "new => checking => connected => completed => disconnected => new => checking => connected => completed".

var peer = new RTCPeerConnection(iceServersArray, optionalArgsArray);
peer.oniceconnectionstatechange = function() {
   if(peer.iceConnectionState == 'completed') {
      var message = 'WebRTC RTP ports are connected over UDP. ';
      message += 'Wait a few seconds for the remote stream to start flowing.';
      alert(message);
   }
};

4. How to check if a peer connection is closed or dropped?

Again, watch for "oniceconnectionstatechange" event handler and check for "peer.iceConnectionState=='disconnected'".

The ICE agent changes the relevant candidates' state to "disconnected" in the following cases:
  1. If peer connection is closed.
  2. If you're renegotiating media; in this case, the previous peer connection is closed and re-established.

var peer = new RTCPeerConnection(iceServersArray, optionalArgsArray);
peer.oniceconnectionstatechange = function() {
   if(peer.iceConnectionState == 'disconnected') {
      var message = 'WebRTC RTP ports are closed. ';
      message += 'UDP connection is dropped.';
      alert(message);
   }
};

5. How to check if the browser has a microphone or webcam?

There is a JavaScript library named "DetectRTC.js", which uses the "getSources/getMediaDevices" API to fetch the list of all audio/video input devices.

If there is no audio input device, it reports:

Your browser has NO microphone attached, or you clicked the deny button at some point and the browser is still denying the webpage access to the relevant media.

CheckDeviceSupport(function (chrome) {
    if (chrome.hasMicrophone) {}

    if (chrome.hasWebcam) {}
});

// above function is defined here
function CheckDeviceSupport(callback) {
    var Chrome = {};

    // This method is useful only for Chrome!

    // "navigator.getMediaDevices" will be next!
    // "MediaStreamTrack.getSources" will be removed.

    // 1st step: verify "MediaStreamTrack" support.
    if (!window.MediaStreamTrack) return callback(Chrome);

    // 2nd step: verify "getSources" support, which is planned to be removed soon!
    // "getSources" will be replaced with "getMediaDevices"
    if (!MediaStreamTrack.getSources) {
        MediaStreamTrack.getSources = MediaStreamTrack.getMediaDevices;
    }

    // if there is still no "getSources", it MUST be Firefox!
    if (!MediaStreamTrack.getSources) {
        // assuming that it is an older Chrome or Chromium implementation
        if (!!navigator.webkitGetUserMedia) {
            Chrome.hasMicrophone = true;
            Chrome.hasWebcam = true;
        }

        return callback(Chrome);
    }

    // loop over all audio/video input/output devices
    MediaStreamTrack.getSources(function (sources) {
        var result = {};

        for (var i = 0; i < sources.length; i++) {
            result[sources[i].kind] = true;
        }

        Chrome.hasMicrophone = !!result.audio;
        Chrome.hasWebcam = !!result.video;
        
        callback(Chrome);
    });
}


6. How to fix echo/noise issues?

According to this page:
Echo is a distortion of voice that occurs when input/output audio devices are placed too close together, when the audio output level is too high, or when CPU usage is exhausted by other applications or by the same application.

Firefox 29 and later builds have good echo cancellation. Echo handling is also improved in Chrome 34 and later builds.

Make sure that your speaker's sample rate (kHz) matches your application's sample rate. A mismatch will lead to echo.

If you're using Mac OS X, you can often resolve echo issues by enabling "ambient noise reduction". You can search Google for how to enable it for built-in audio devices on a Mac.

It is possible to watch the audio level of RTP packets using the getStats API; however, there is no API other than the Media Processing API that allows setting volume from JavaScript applications. The Media Processing draft isn't standardized yet and, AFAIK, none of the browser vendors have implemented its volume-specific API. The Gecko team, though, implemented the captureStreamUntilEnded API from the same draft in Firefox.

Be careful when using the WebAudio API along with the getUserMedia or RTCPeerConnection APIs; sometimes you accidentally (or intentionally) connect an input node to an output node, which obviously causes huge echo.
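A small sketch of the safe pattern (the helper name is mine): analyze the microphone via the Web Audio API without ever connecting the source to the destination node:

```javascript
// Sketch: wire the microphone into an AnalyserNode for level metering
// WITHOUT connecting it to the speakers. The helper name is an
// assumption; connecting source -> context.destination would play the
// mic back locally and create a feedback loop (echo).
function connectMicForAnalysis(context, stream) {
    var source = context.createMediaStreamSource(stream);
    var analyser = context.createAnalyser();

    source.connect(analyser); // safe: analysis only, nothing audible

    // never do this:
    // source.connect(context.destination); // mic -> speakers = echo

    return analyser;
}
```

In the browser, `context` would be a real AudioContext and `stream` the result of getUserMedia.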

Developers have suggested headphones/headset microphones to overcome echo issues; however, it appears such solutions don't always help. You must check all the relevant conditions to make sure nothing is causing noise.

7. How to check estimated bandwidth?

Sometimes this is known as "available bandwidth" or "bandwidth consumed". You can set up a listener over the getStats API to check bytesSent per second; then you can easily find out how much bandwidth your application is currently consuming.

Remember, this checks bandwidth in a one-to-one scenario; you need deeper tricks to estimate bandwidth in multi-peer scenarios.

The following example assumes that you opened an audio-only connection between two users and are checking the bandwidth consumed by outgoing RTP ports.

function getStats(peer) {
    _getStats(peer, function (results) {
        for (var i = 0; i < results.length; ++i) {
            var res = results[i];

            if (res.googCodecName == 'opus') {
                if (!window.prevBytesSent) 
                    window.prevBytesSent = res.bytesSent;

                var bytes = res.bytesSent - window.prevBytesSent;
                window.prevBytesSent = res.bytesSent;

                // convert bytes/second into kilobits/second
                var kilobits = (bytes * 8) / 1024;
                console.log(kilobits.toFixed(1) + ' kbits/s');
            }
        }

        setTimeout(function () {
            getStats(peer);
        }, 1000);
    });
}

// a wrapper around getStats which hides the differences (where possible)
// the following code snippet is taken from GitHub
function _getStats(peer, cb) {
    if (!!navigator.mozGetUserMedia) {
        peer.getStats(
            function (res) {
                var items = [];
                res.forEach(function (result) {
                    items.push(result);
                });
                cb(items);
            },
            cb
        );
    } else {
        peer.getStats(function (res) {
            var items = [];
            res.result().forEach(function (result) {
                var item = {};
                result.names().forEach(function (name) {
                    item[name] = result.stat(name);
                });
                item.id = result.id;
                item.type = result.type;
                item.timestamp = result.timestamp;
                items.push(item);
            });
            cb(items);
        });
    }
};

You can use it like this:

peer.onaddstream = function(event) {
   getStats(peer);
};

8. How to listen for audio/video elements' native events?

You can easily listen for the "onpause", "onplay", and "onvolumechange" events.

htmlVideoElement.onplay = function () {
    // fired each time playback starts, e.g. via the play() method
};

htmlVideoElement.onpause = function () {
    // fired each time playback is paused, e.g. via the pause() method
};

htmlVideoElement.onvolumechange = function () {
    // htmlVideoElement.volume
};

9. How to get list of all audio/video input devices?

Note: the Chromium team is implementing the "navigator.getMediaDevices" interface, which will allow you to fetch both input and output devices.

The following code snippet uses "MediaStreamTrack.getSources" to fetch all audio/video input devices.

function getInputDevices(callback) {
    // This method is useful only for Chrome!

    var devicesFetched = {};

    // 1st step: verify "MediaStreamTrack" support.
    if (!window.MediaStreamTrack && !navigator.getMediaDevices) {
        return callback(devicesFetched);
    }

    if (!window.MediaStreamTrack && navigator.getMediaDevices) {
        window.MediaStreamTrack = {};
    }

    // 2nd step: verify "getSources" support, which is planned to be removed soon!
    // "getSources" will be replaced with "getMediaDevices"
    if (!MediaStreamTrack.getSources) {
        MediaStreamTrack.getSources = MediaStreamTrack.getMediaDevices;
    }

    // todo: need to verify if this trick works
    // via: https://code.google.com/p/chromium/issues/detail?id=338511
    if (!MediaStreamTrack.getSources && navigator.getMediaDevices) {
        MediaStreamTrack.getSources = navigator.getMediaDevices.bind(navigator);
    }

    // if there is still no "getSources", it MUST be Firefox!
    // or otherwise an older Chrome
    if (!MediaStreamTrack.getSources) {
        return callback(devicesFetched);
    }

    // loop over all audio/video input/output devices
    MediaStreamTrack.getSources(function (media_sources) {
        var sources = [];
        for (var i = 0; i < media_sources.length; i++) {
            sources.push(media_sources[i]);
        }

        getAllUserMedias(sources);

        if (callback) callback(devicesFetched);
    });

    var index = 0;

    function getAllUserMedias(media_sources) {
        var media_source = media_sources[index];
        if (!media_source) return;

        // to prevent duplicated devices to be fetched.
        if (devicesFetched[media_source.id]) {
            index++;
            return getAllUserMedias(media_sources);
        }
      
        devicesFetched[media_source.id] = media_source;

        index++;
        getAllUserMedias(media_sources);
    }
}

You can use it like this:

getInputDevices(function (devices) {
    for (var device in devices) {
        device = devices[device];

        // device.kind == 'audio' || 'video'
        console.log(device.id, device.label);
    }
});

10. How to choose STUN or TURN and skip Host/Local candidates?

Currently, it is not possible in Chrome to gather only STUN or only TURN candidates.

A quick workaround is to skip calling "addIceCandidate" for the candidate types you don't want:

var host      = false;
var reflexive = false;
var relay     = true;

peer.onicecandidate = function(e) {
     var ice = e.candidate;
     if(!ice) return;
  
     if(host && ice.candidate.indexOf('typ host ') == -1) return;
     if(reflexive && ice.candidate.indexOf('typ srflx ') == -1) return;
     if(relay && ice.candidate.indexOf('typ relay ') == -1) return;
  
     POST_to_Other_Peer(ice);
};

Above code snippet is taken from this link.

Updated at Jan 22, 2016

You can find tons of other WebRTC tips & tricks on the WebRTC-pedia page:

https://www.webrtc-experiment.com/webrtcpedia/