trx_toolkit/data_msg.py: use list comprehension for bit conversion
This approach performs much better than repeated buf.append() calls.
Consider the following bit conversion benchmark:
usbits = [random.randint(0, 254) for i in range(GSM_BURST_LEN)]
ubits = [int(b > 128) for b in usbits]

for i in range(100000):
    sbits = DATAMSG.usbit2sbit(usbits)
    assert(DATAMSG.sbit2usbit(sbits) == usbits)

    sbits = DATAMSG.ubit2sbit(ubits)
    assert(DATAMSG.sbit2ubit(sbits) == ubits)
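To illustrate the kind of change this patch makes, here is a minimal,
hypothetical sketch (not the actual data_msg.py code) of one such
conversion, ubit2sbit, written first with list.append() and then as the
equivalent list comprehension. The 0 -> +127 / 1 -> -127 mapping is
assumed for the sake of the example:

```python
def ubit2sbit_append(bits):
    """Unsigned bits (0/1) to soft bits, using list.append()."""
    buf = []
    for b in bits:
        # 0 -> +127 (strong zero), 1 -> -127 (strong one)
        buf.append(-127 if b else 127)
    return buf

def ubit2sbit_listcomp(bits):
    """Same conversion as a single list comprehension."""
    return [-127 if b else 127 for b in bits]

# Both variants produce identical results; the comprehension avoids
# one bound-method call per bit.
assert ubit2sbit_append([0, 1, 1]) == ubit2sbit_listcomp([0, 1, 1])
```

The comprehension builds the list in a single bytecode loop, which is
what eliminates the per-element 'append' calls visible in the profile
below.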
=== Before this patch:
59603795 function calls (59603761 primitive calls) in 11.357 seconds
Ordered by: internal time
  ncalls  tottime  percall  cumtime  percall filename:lineno(function)
59200093    3.389    0.000    3.389    0.000 {method 'append' of 'list' objects}
  100000    2.212    0.000    3.062    0.000 data_msg.py:191(usbit2sbit)
  100000    1.920    0.000    2.762    0.000 data_msg.py:214(sbit2ubit)
  100000    1.835    0.000    2.677    0.000 data_msg.py:204(sbit2usbit)
  100000    1.760    0.000    2.613    0.000 data_msg.py:224(ubit2sbit)
=== After this patch:
803794 function calls (803760 primitive calls) in 3.547 seconds
Ordered by: internal time
  ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  100000    1.284    0.000    1.284    0.000 data_msg.py:203(<listcomp>)
  100000    0.864    0.000    0.864    0.000 data_msg.py:193(<listcomp>)
  100000    0.523    0.000    0.523    0.000 data_msg.py:198(<listcomp>)
  100000    0.500    0.000    0.500    0.000 data_msg.py:208(<listcomp>)
       1    0.237    0.237    3.547    3.547 data_msg.py:25(<module>)
  100000    0.035    0.000    0.899    0.000 data_msg.py:191(usbit2sbit)
  100000    0.035    0.000    0.558    0.000 data_msg.py:196(sbit2usbit)
  100000    0.033    0.000    0.533    0.000 data_msg.py:206(ubit2sbit)
  100000    0.033    0.000    1.317    0.000 data_msg.py:201(sbit2ubit)
So the new implementation is ~70% faster in this case, and makes
significantly fewer function calls according to cProfile [1].
[1] https://docs.python.org/3.8/library/profile.html
Change-Id: I01c07160064c8107e5db7d913ac6dec6fc419945