AVIS: Adaptive VIdeo Simulation

This article is a guide to performing scalable video coding (SVC) simulations using AVIS, a framework for simulating adaptive scalable video streaming in Network Simulator 2 (NS2). Readers interested in the design of AVIS are referred to our published paper:

Ghada G. Rizk, Ahmed H. Zahran, Mahmoud H. Ismail, “AVIS: An Adaptive VIdeo Simulation Framework for Scalable Video,” in Proc. of the 8th International Conference on Next Generation Mobile Apps, Services and Technologies, Oxford, UK, 10-12 Sep. 2014.

If you plan to use AVIS, please cite the paper above. 

This article also includes a summary of our experience with open-source tools for scalable video, specifically JSVM (the Joint Scalable Video Model) and the OpenSVC Decoder.


Video traffic has special requirements in terms of available bandwidth and encountered delay. Hence, video streaming is highly sensitive to channel condition variations. The development of tools for evaluating video streaming performance has been the focus of many works in the literature. These tools are important because network-level performance metrics, such as packet loss rate, available bandwidth, and delay jitter, may not accurately reflect application-level performance metrics, such as the percentage of decodable frames, the quality of decodable frames, the rate of quality shifts, MOS (Mean Opinion Score), and PSNR (Peak Signal-to-Noise Ratio).

While AVIS is focused on the simulation of SVC-based adaptive video streaming, it is worth pointing out that AVIS shares important steps with other simulation and experimental video performance evaluation frameworks such as EVALVID, SVEF, and myEVALSVC. These steps include:

  1. Video encoding, which compresses the video according to the target encoding scheme.
  2. Video packet transmission, which may be performed over an experimental testbed as in SVEF or using a simulator as in EVALVID and myEVALSVC.
  3. Decoding the received packets to obtain the received video and generating application-level performance indices such as PSNR and MOS.

The implementation of these steps is captured in the following figure, which represents the performance evaluation framework of SVEF and myEVALSVC. It is worth noting that myEVALSVC ports SVEF to enable the simulation of SVC video.


Note that the aforementioned steps interact through trace files whose format varies with the video encoding and evaluation approach. In the next section, we focus on simulating scalable video using AVIS.

AVIS Setup 


The following instructions have been tested with NS2 version 2.33; however, they should be easily applicable to other versions.

  • Go to the common directory ns-2.33/common
  • Edit the packet.h file by adding the following declarations to the hdr_cmn structure (the additions are the lines from frametype_ through AVIS_quality; the remaining lines are existing NS2 code shown for context)

int iface_; // receiving interface (label)
dir_t direction_; // direction: 0=none, 1=up, -1=down
int frametype_; // added by myEvalSVC
double sendtime_;
unsigned int pkt_id_;
unsigned int frame_pkt_id_;
uint8_t svc_lid;
uint8_t svc_tid;
uint8_t svc_qid;
uint16_t svc_frameno;
double svc_sendtime; // added by myEvalSVC
int AVIS_quality; // added by AVIS
// source routing
char src_rt_valid;
double ts_arr_; // Required by Marker of JOBS

  • Edit agent.h by adding the following declarations to the class Agent private section (the additions are the lines from frametype_ through AVIS_quality; the remaining lines are existing NS2 code shown for context)

int flags_; // for experiments (see ip.h)
int defttl_; // default ttl for outgoing pkts
int frametype_; // added by myEvalSVC
uint8_t svc_lid;
uint8_t svc_tid;
uint8_t svc_qid;
uint16_t svc_frameno;
double svc_sendtime; // added by myEvalSVC
int AVIS_quality; // added by AVIS
#ifdef notdef
int seqno_; /* current seqno */
int class_; /* class to place in packet header */


  • Edit agent.h by adding the following inline functions to the class Agent definition (the additions are the functions from set_frametype through set_quality; the first four lines are existing NS2 code shown for context)

inline nsaddr_t& daddr() { return dst_.addr_; }
inline nsaddr_t& dport() { return dst_.port_; }
void set_pkttype(packet_t pkttype) { type_ = pkttype; }
inline packet_t get_pkttype() { return type_; }
inline void set_frametype(int type) { frametype_ = type; }
inline void set_prio(int prio) { prio_ = prio; }
inline void set_lid(uint8_t lid) { svc_lid = lid; }
inline void set_tid(uint8_t tid) { svc_tid = tid; }
inline void set_qid(uint8_t qid) { svc_qid = qid; }
inline void set_frameno(uint16_t frameno) { svc_frameno = frameno; }
inline void set_sendtime(double time) { svc_sendtime = time; }
inline void set_quality(int q) { AVIS_quality = q; } // added by AVIS

  • Edit the agent.cc file by adding the myEvalSVC and AVIS initializers shown below to the constructor member initialization list of the Agent class

Agent::Agent(packet_t pkttype) :
    size_(0), type_(pkttype),
    frametype_(0), svc_lid(0), svc_tid(0), svc_qid(0), svc_frameno(0), svc_sendtime(0.0),
    AVIS_quality(0), // added by AVIS
    channel_(0), traceName_(NULL),
    oldValueList_(NULL), app_(0), et_(0){}

  • Add the ch->frametype_ through ch->AVIS_quality assignments shown below to the implementation of the function initpkt in the Agent class

void Agent::initpkt(Packet* p) const
{
    // ... (earlier header-field initialization elided)
    ch->error() = 0; /* pkt not corrupt to start with */
    ch->frametype_ = frametype_;
    ch->svc_lid = svc_lid;
    ch->svc_tid = svc_tid;
    ch->svc_qid = svc_qid;
    ch->svc_frameno = svc_frameno;
    ch->svc_sendtime = svc_sendtime;
    ch->AVIS_quality = AVIS_quality; // added by AVIS
    hdr_ip* iph = hdr_ip::access(p);
    // ...

  •  Edit tcl/lib/ns-default.tcl by adding the following lines at the end of the file: 

#added by smallko (myEvalSVC)
Agent/UDP set packetSize_ 1000
Tracefile set debug_ 0
AVIS_RX set downloadRate_ 738.10
AVIS_RX set quality_ 19
AVIS_RX set algorithm 0
AVIS_RX set PlayoutStartTime 6
Application/Traffic/AVIS_Tx set downloadRate_otcl [AVIS_RX set downloadRate_]
Application/Traffic/AVIS_Tx set quality_otcl [AVIS_RX set quality_]
Application/Traffic/AVIS_Tx set CGS_otcl 0


  • Edit the NS2 Makefile.in by adding the AVIS object files shown below to the OBJ_CC list

mcast/lms-sender.o \
queue/delayer.o \
AVIS/AVIS_sink.o AVIS/sortedList.o AVIS/AVIS_rx.o AVIS/AVIS_appBuffer.o \
xcp/xcpq.o xcp/xcp.o xcp/xcp-end-sys.o \

  • Recompile ns2 and install the new binary file to the system directory

$ ./configure
$ make clean
$ make



AVIS Simulation 

The following figure highlights the heart of SVC-based video streaming using AVIS. Our entry point is A, at which the encoded video is used to generate the trace and information files required for performing adaptive video simulation. Our exit point is B, at which the traces generated by the AVIS receiver are used to produce a decodable video file. In the following, we highlight the main steps of the pre- and post-processing phases, using JSVM for encoding and the OpenSVC decoder for decoding the video. Our focus is on explaining the steps for implementing adaptive streaming algorithms in AVIS.



Video Encoding

The encoding is performed using JSVM, the only freely available SVC encoder. In order to complete this step, you need to:

  1. have a working version of the JSVM encoder
  2. prepare the raw YUV video(s) to be encoded
  3. write a set of configuration files defining the encoding parameters
  4. run the encoder to get the compressed H.264 video
    • put all the files in one folder and run the following command:
      $ H264AVCEncoderLibTestStatic  -pf  main.cfg  >  encoder_op.txt

The encoder_op.txt output should look like this.

Simulation Preprocessing 

The objective of this stage is to prepare the files required by the simulator including:

  1. a file containing the encoded layer information, such as layer number, scalability parameters, and average bit-rate
  2. a trace file for the transmitted packets, including send time, packet size, LId, TId, QId, and frame number


STEP 1: Use the JSVM BitStreamExtractor to generate the original NALU trace file (bitstreamExt_NALU.txt) and the layer information file by running the following command:

$ BitStreamExtractorStatic  -pt  bitstreamExt_NALU.txt  encodedVideo.264 >  bitStreamExt_layerInfo.txt

The bitStreamExt_layerInfo.txt file would look like this.

and the bitstreamExt_NALU.txt file would look like this.


STEP 2: Use our layer-information awk script (layer.awk) to generate the avis_tx_layer_info.txt file:

$ awk  -f  layer.awk  bitStreamExt_layerInfo.txt  > avis_tx_layer_info.txt 

The avis_tx_layer_info.txt file should look like this. 


STEP 3: Prepare the avis_tx_trace.txt file. This involves the following sub-steps:

    • Decode the video file to get information about the decoded NALUs:

$ H264AVCDecoderLibTestStatic  encodedVideo.264  decodedVideo.yuv > decoderoutput.txt

    • Compile and run our prepareAvisTxTrace tool:

$ g++  prepareAvisTxTrace.cpp  -o  prepareAvisTxTrace
$ ./prepareAvisTxTrace  decoderoutput.txt  bitstreamExt_NALU.txt  avis_tx_trace.txt  avis_NALU.txt  30

avis_tx_trace.txt would look like this.

avis_NALU.txt would look like this.

AVIS simulation 

We need to create several objects to run the simulation, including AVIS_Tx and AVIS_RX and their related files required for input and output tracing purposes.

AVIS TCL Transmitter objects

We need to define two objects:

UDP object:

set max_fragmented_size 1480;
# 20: IP header length
set packetSize [expr $max_fragmented_size+20];
set src_udp1 [new Agent/AVIS_UDP];
$src_udp1 set_filename avis_Tx_pkts.txt; # file to record sent packet IDs
$src_udp1 set packetSize_ $packetSize;
$ns attach-agent $n1 $src_udp1;

AVIS-Tx object:
    • Requires two input files: the traffic trace file and the layer information file.
    • Identify the encoding type (CGS or MGS) by setting the CGS variable; here, CGS 0 selects the MGS dependency rule, as in the example below.

set avis_tx1 [new Application/Traffic/AVIS_Tx]; # create AVIS Tx
$avis_tx1 attach-agent $src_udp1; # attach to UDP agent
$avis_tx1 tracefile avis_tx_trace.txt; # define trace file
$avis_tx1 layerfile avis_tx_layer_info.txt; # define layer info file
$avis_tx1 CGS 0; # to use the dependency rule of MGS

 AVIS TCL Receiver objects

We need to define two objects:

AVIS_sink object

set dst_udp1 [new Agent/AVIS_Sink]
$ns attach-agent $n1 $dst_udp1
$ns connect $src_udp1 $dst_udp1
$dst_udp1 set_filename avis_rx_pkts.txt; # used to record all received packets' info

AVIS_RX object:

set avis_Rx1 [new AVIS_RX]
$avis_Rx1 sortedListfile avis_rx_notDeliveredFrames.txt; # records received NALU/frame info; by the end of streaming, all these NALUs should be buffered and removed from this list
$avis_Rx1 PDRFile avis_rx_PLR; # records lost packet IDs and the corresponding expected frame
$avis_Rx1 Buffer-filename avis_rx_FramesNotBuffered; # records buffered NALU/frame info; by the end of streaming, all these NALUs should be played out and removed from this list
$avis_Rx1 Buffer-filename2 avis_rx_PlayoutFrames; # records played-out NALU/frame info
$dst_udp1 attach-Rx $avis_Rx1;
$avis_Rx1 set PlayoutStartTime 6; # initial buffering time before playout starts
$avis_Rx1 set algorithm 0; # default value 0 --> no adaptation
$avis_Rx1 TraceFile avis_rx_traces.txt; # main trace file


The main trace file includes:

Simulation time, number of received frames at this time, number of buffered frames, requested quality, transmission quality, transmission bit-rate corresponding to the transmission quality, number of lost packets, PLR (packet loss rate), number of lost frames, FLR (frame loss rate), buffer time, and mode [buffering only / playout / flushing].


Finally, to run the simulation, typical start and stop commands are used as follows:

$ns at 0.0 "$avis_tx1 start"
$ns at 0.0 "$avis_Rx1 connect-Tx $avis_tx1"; # must be set so that control signals can be sent from Rx to Tx in receiver-based strategies
$ns at 34.0 "$avis_tx1 stop"
# calls the linked-list destructor and frees pointers (should be called here)
$ns at 42.0 "$avis_Rx1 Stop"
$ns at 42.0 "finish"
# run the simulation
$ns run

A simple simulation script can be found here. 

Developing adaptive streaming algorithms

You can develop adaptive streaming algorithms using either the C++ or the Tcl interface.

C++ :

In the handle function of tracingHandler in the AVIS_rx.cc file, you can write your algorithm. This function is called periodically (e.g., every 0.5 s) to record the main trace file information. If the rx->algorithm variable equals 0 (the default value), no adaptation is done. The implementation of the handle function is shown below:

void tracingHandler::handle(Event* e)
{
    tracingEvent* trace = (tracingEvent*)e;
    double bitRate;
    double time = Scheduler::instance().clock();
    char Mode[20];

    if (rx->buffer->displaying == 0)
        /* ... elided: set Mode for the buffering-only state ... */;
    else if (rx->buffer->displaying == 1)
        /* ... elided: set Mode for the playout state ... */;
    else if (rx->buffer->displaying == 2)
        strcpy(Mode, "Empty buffer");

    if (time == 0.0)
    {
        /* ... elided: first-call initialization ... */
    }

    if (rx->algorithm == 0)
    {
        // when algorithm == 0, no adaptation is done
    }
    else if (rx->algorithm == 1)
    {
        // you can write your algorithm here
    }

    /* ... elided: trace-file logging and event rescheduling ... */
}

TCL Interface:

  • You can get the value of the buffer time, PLR, or FLR at any time using the following Tcl commands

$avis_Rx1 getPLR
$avis_Rx1 getFLR
$avis_Rx1 get_Buffer_time

and use their values in your algorithm.

  • You can adapt the streaming quality and bit-rate at any instant using the following Tcl command:

$avis_tx1 setParameters 625.6 9 

  • Also, you can pause and then play the traffic at any instant:


$ns at 42.0 "$avis_tx1 Play"

$ns at 42.0 "$avis_tx1 Pause"


Post processing and Video decoding 

The decoding is performed using the OpenSVC Decoder due to the limitations of the JSVM decoder in the case of spatial scalability [see Appendix A]. In order to complete this step, you need to have a working version of the OpenSVC Decoder.

STEP 1: Remove unfilled dependencies

The playout frames recorded in the trace file avis_rx_PlayoutFrames are then processed by prepareReceivedTrace.cpp to remove unfilled dependencies as follows:

$ g++  prepareReceivedTrace.cpp  -o  prepareReceivedTrace

$ ./prepareReceivedTrace  MGS  avis_tx_trace.txt  avis_rx_PlayoutFrames.txt  avis_tx_layer_info.txt  LostFrames.txt  Filltered_Frames.txt  F_Q.txt

This tool takes several inputs:

        • Encoding type (used in the dependency rule): MGS or CGS
        • Layer information file (used in the dependency rule): avis_tx_layer_info.txt
        • Transmitted trace file (used to compare received frames' sizes with the transmitted ones; if they differ, the frame is considered lost): avis_tx_trace.txt
        • Playout frames trace file: avis_rx_PlayoutFrames.txt

and produces the following outputs:

        • Lost frames, whether lost during transmission, arrived too late to be useful, or dropped due to unfilled dependencies: LostFrames.txt
        • Filtered frames' information (playout time, size, layer information): Filltered_Frames.txt
        • Each frame and its playout quality after filtering: F_Q.txt

STEP 2: Getting the H.264 version of the filtered received frames


The filtered playout frames recorded in the trace file Filltered_Frames.txt are then processed by prepareFillteredTrace.cpp to produce a trace file in the format of the BitStreamExtractor's trace file, so that an H.264 version of the received video can be obtained as follows:


$ g++  prepareFillteredTrace.cpp  -o  prepareFillteredTrace

$ ./prepareFillteredTrace  Filltered_Frames.txt  avis_NALU.txt  Filltered_Frames_trace.txt


STEP 3: Use the OpenSVC Decoder (the commands below use the paris CIF test sequence; substitute your own file names):
$ BitStreamExtractorStatic paris.264 filteredtrace.264 -et filteredtrace.txt > BitStream_op2.txt
$ mplayer -fps 30 filteredtrace.264 -vo yuv4mpeg:file='paris.y4m'
$ mencoder paris.y4m -ovc raw -of rawvideo -vf format=i420 -o final.yuv

STEP 4: PSNR calculation using the JSVM tool

$ PSNRStaticd 352 288 paris_cif.yuv final.yuv > psnr.txt


 Appendix A : Decoding tools

The most famous available open-source H.264 SVC decoders are JSVM (the Joint Scalable Video Model) and the OpenSVC Decoder, but both suffer from bugs: neither can properly decode streams with corrupted or missing frames. This problem is pointed out in both the SVEF and myEVALSVC frameworks. Hence, a NALU filter is used in SVEF to remove unfilled dependencies; however, that NALU filter code is written for the case of temporal and quality scalability only, so we generalized it to also cover spatial scalability.

Also, the JSVM decoder halts its operation when the loss of a non-discardable NALU is encountered. More surprisingly, the decoder behavior changes from one scalability combination to another: typically, a frame loss should only affect one or two GoPs while the rest of the video is decoded, but this behavior is observed only in the temporal-scalability-only encoding case, and the decoder halts for all other encoding combinations. This catastrophic problem cannot be worked around, since frame loss events happen frequently, especially in wireless networks.

OpenSVC decoder does not halt the decoding and always produces an output. But, OpenSVC decoder do not produce all the frames when spatial scalability is used and the lowest resolution is not using the highest frame rate. This problem can be avoided by using the same maximum frame rate for different reolutions. Another problem wit openSVC decoder is its failure to produce the frames of last two GoPs in the video. Hence, PSNR is calculated after exculding these last few frames. Alternatively, redundant GoPs may be added towards the end of the tested video.