Android JB 4.2.2 OMX Input Buffer Size Issue

+2 votes
1,347 views

I have an OpenMAX component which can decode AVC/H.264 streams. The component works fine for .mp4 clips and I am able to play them without issues. Now when I switch to .ts clips (which are H.264/AVC video with AAC audio, because Android only supports that), I see that the input buffer size is never sufficient to push the data into the hardware.

By default I have a buffer size of 32 KB, which is later increased to 64 KB by a SetParameter call. I see the failure in this case.

Then I changed the default buffer size to 256 KB; this size is retained and not changed by the SetParameter call. Even with a 256 KB input buffer I still see the failure (log attached below).
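
For reference, this is roughly how the input-port buffer size is being overridden before buffer allocation (a minimal sketch, not the component's actual code; the handle and the input port index 0 are assumptions):

#include <OMX_Core.h>
#include <OMX_Component.h>

/* Sketch: query the input port definition, raise nBufferSize, write it back.
 * This has to happen before the buffers are allocated, otherwise the
 * component keeps the old size. */
static OMX_ERRORTYPE grow_input_buffers(OMX_HANDLETYPE hComp, OMX_U32 newSize)
{
    OMX_PARAM_PORTDEFINITIONTYPE def;
    OMX_ERRORTYPE err;

    def.nSize = sizeof(def);
    def.nVersion.s.nVersionMajor = 1;   /* OpenMAX IL 1.1 */
    def.nVersion.s.nVersionMinor = 1;
    def.nVersion.s.nRevision = 0;
    def.nVersion.s.nStep = 0;
    def.nPortIndex = 0;                 /* assumed input port index */

    err = OMX_GetParameter(hComp, OMX_IndexParamPortDefinition, &def);
    if (err != OMX_ErrorNone)
        return err;

    if (def.nBufferSize < newSize)
        def.nBufferSize = newSize;      /* e.g. 256 * 1024 */

    return OMX_SetParameter(hComp, OMX_IndexParamPortDefinition, &def);
}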

I get the following error:

I/ATSParser( 2000): resizing buffer to 262144 bytes 
I/ATSParser( 2000): resizing buffer to 327680 bytes 
E/OMXCodec( 2000): [OMX.BCM.Video.decoder] Codec's input buffers are too small to accomodate buffer read from source (info->mSize = 262144, srcLength = 269076) 
E/MediaPlayer( 3598): error (1, -**********) 
E/MediaPlayer( 3598): Error (1,-**********) 
D/VideoView( 3598): Error: 1,-********** 

Any input?

posted Dec 6, 2013 by Majula Joshi


1 Answer

+1 vote

Looks like a bug in the Android code:

The ATS parser is increasing the buffer size after the negotiated size has already been set on the input port of the OMX codec. Obviously there is no way the codec will be able to find a buffer big enough.
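
To illustrate (a minimal sketch, not the actual AOSP source): the codec allocated its input buffers at the negotiated size, but the parser then hands it a larger access unit, so the size check on the input path can only fail:

#include <stddef.h>
#include <stdio.h>

/* Sketch of the check the OMXCodec log line implies: an access unit read
 * from the source is compared against the size the input buffers were
 * allocated with. */
static int queue_input_buffer(size_t allocated_size, size_t src_length)
{
    if (src_length > allocated_size) {
        fprintf(stderr,
                "input buffers too small (mSize = %zu, srcLength = %zu)\n",
                allocated_size, src_length);
        return -1;
    }
    return 0;   /* the buffer would be queued to the decoder here */
}

int main(void)
{
    /* Values from the log above: 256 KiB buffers vs. a 269076-byte unit. */
    return queue_input_buffer(262144, 269076) ? 1 : 0;
}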

answer Dec 6, 2013 by anonymous
Similar Questions
0 votes

According to the CDD document, it is stated: "Device implementations MUST have at least 340MB of memory available to the kernel and userspace. The 340MB MUST be in addition to any memory dedicated to hardware components such as radio, video, and so on that is not under the kernel's control."

My understanding:
340 MB is required for both user and kernel space, and another 340 MB is for hardware components, so in total a minimum of 680 MB. (Please correct me if my understanding is wrong.)

But if I look at mobiles that are available in the market, they are not compliant with this requirement. (The HTC One V has 512 MB of RAM, but still runs Android 4.0.3, for which the requirement would be 680 MB.)

Considering the above, how does an OEM vendor pass the compatibility test?

+1 vote

Are there any patches/examples on how to enable USB Bluetooth support with JB 4.2.2 (Bluedroid)?

+1 vote

I have a job running very slowly. When I examine the cluster, I find my hdfs user using 170 MB of swap (seen through the top command); that user runs the datanode daemon. The ps output below shows two -Xmx values, and I do not know which value is the real one, 1000m or 10240m.

# ps -ef|grep 2853
root      2095  1937  0 15:06 pts/4    00:00:00 grep 2853
hdfs      2853     1  5 Nov07 ?        1-22:34:22 /usr/java/jdk1.7.0_45/bin/java -Dproc_datanode -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-ch14.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Xmx10240m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/hadoop-hdfs/gc-ch14-datanode.log -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
+1 vote

We are currently facing a frustrating Hadoop Streaming memory problem. Our setup:

  • our compute nodes have about 7 GB of RAM
  • Hadoop Streaming starts a bash script which uses about 4 GB of RAM
  • therefore it is only possible to start one and only one task per node

Out of the box, each Hadoop instance starts about 7 containers with the default Hadoop settings. Each Hadoop task forks a bash script that needs about 4 GB of RAM; the first fork works, but all following forks fail because they run out of memory. So what we are looking for is a way to limit the number of containers to only one. What we found on the internet:

  • yarn.scheduler.maximum-allocation-mb and mapreduce.map.memory.mb are set to values such that there is at most one container. This means mapreduce.map.memory.mb must be more than half of the maximum memory (otherwise there will be multiple containers).

Done right, this gives us one container per node. But it produces a new problem: since our Java process is now using at least half of the maximum memory, the child (bash) process we fork inherits the parent's memory footprint, and since the memory used by the parent was more than half of total memory, we run out of memory again. If we lower the map memory, Hadoop allocates 2 containers per node, which run out of memory too.

Since this problem is a blocker in our current project, we are evaluating adapting the source code to solve this issue as a last resort. Any ideas on this are very much welcome.
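
To illustrate the fork behaviour described above (a minimal sketch under an assumed strict overcommit setting, i.e. vm.overcommit_memory=2 on Linux; the 4 GB figure mirrors the streaming task's footprint): a parent that has committed a few GB can fail to fork even a tiny child, because fork() must account for a full copy of the parent's address space before the child gets a chance to exec:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Commit ~4 GB in the parent, roughly what the streaming task holds. */
    size_t big = (size_t)4 * 1024 * 1024 * 1024;
    char *heap = malloc(big);
    if (heap == NULL) { perror("malloc"); return 1; }
    memset(heap, 1, big);               /* touch the pages so they are committed */

    /* fork() duplicates the parent's committed memory (copy-on-write, but it
     * still counts against the commit limit), so under strict overcommit this
     * can fail with ENOMEM even though the child only wants to exec bash. */
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        execlp("bash", "bash", "-c", "echo child ran", (char *)NULL);
        _exit(127);
    }
    waitpid(pid, NULL, 0);
    free(heap);
    return 0;
}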

+1 vote

I'm trying to randomly access some bytes in a huge file (>4 GB) on the Android platform in C code. However, fseek and ftell have int limitations. Googling gave the options of fseeko and ftello with the compiler flag -D_FILE_OFFSET_BITS=64, but that doesn't seem to work. So is it possible, and is there a solution to this problem?
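
One possible workaround (a minimal sketch, assuming an Android/Bionic target where older NDK headers ignore _FILE_OFFSET_BITS=64): skip the stdio fseek/ftell layer and use the explicit 64-bit calls lseek64()/read() on a raw file descriptor. The file name and the 5 GB offset are placeholders:

#define _LARGEFILE64_SOURCE 1
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd = open("bigfile.bin", O_RDONLY | O_LARGEFILE);
    if (fd < 0) { perror("open"); return 1; }

    /* Seek to an offset beyond the 2/4 GB limit of a 32-bit off_t. */
    off64_t target = (off64_t)5 * 1024 * 1024 * 1024;
    if (lseek64(fd, target, SEEK_SET) == (off64_t)-1) {
        perror("lseek64");
        close(fd);
        return 1;
    }

    unsigned char byte;
    if (read(fd, &byte, 1) == 1)
        printf("byte at 5 GB offset: 0x%02x\n", byte);

    close(fd);
    return 0;
}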

...