fix for #1934
This fixes a race condition between the Stack thread and DL
PDU processing that leads to updates of the RLC buffer that
go undetected by the BSR routine.
What happens is that in a UL-SCH PDU all outstanding data is transmitted
and an LBSR with all-zero buffers is sent.
14:39:47.327301 [MAC ] [D] [ 3793] BSR: LCID=3 old_buffer=59
14:39:47.330600 [MAC ] [I] [ 3793] UL LCID=3 len=58 LBSR: b=0 0 0 0
Note that "old_buffer" isn't set to zero here.
At the same time (same TTI), the MAC PDU processing thread handles DL-SCH PDUs
that may generate new UL PDUs:
14:39:47.330749 [RLC ] [I] DRB1 Tx SDU (54 B, tx_sdu_queue_len=1)
14:39:47.330762 [RLC ] [I] DRB1 Tx SDU (54 B, tx_sdu_queue_len=2)
14:39:47.330775 [RLC ] [I] DRB1 Tx SDU (54 B, tx_sdu_queue_len=3)
..
Those PDUs are "new data" since the previous buffer state was zero.
Here is the race between the threads: at the end of the bsr::step() function,
the old_buffer of each LCG is updated with the previous new_buffer, so
the buffer state of LCG=2 is now 59.
Now MAC starts the next TTI:
14:39:47.331910 [MAC ] [D] [ 3794] Running MAC tti=3794
14:39:47.331928 [MAC ] [D] [ 3794] Update Bj: lcid=0, Bj=0
14:39:47.331934 [MAC ] [D] [ 3794] Update Bj: lcid=1, Bj=0
14:39:47.331938 [MAC ] [D] [ 3794] Update Bj: lcid=2, Bj=0
14:39:47.331941 [MAC ] [D] [ 3794] Update Bj: lcid=3, Bj=-1752
14:39:47.331951 [MAC ] [D] [ 3794] BSR: LCID=0 update new buffer=0
14:39:47.331960 [MAC ] [D] [ 3794] BSR: LCID=1 update new buffer=0
14:39:47.331964 [MAC ] [D] [ 3794] BSR: LCID=2 update new buffer=0
14:39:47.331971 [MAC ] [D] [ 3794] BSR: LCID=3 update new buffer=335
14:39:47.331976 [MAC ] [D] [ 3794] BSR: check_new_data() -> get_buffer_state_lcg(0)=0
14:39:47.331980 [MAC ] [D] [ 3794] BSR: check_new_data() -> get_buffer_state_lcg(1)=0
14:39:47.331984 [MAC ] [D] [ 3794] BSR: check_new_data() -> get_buffer_state_lcg(2)=59
14:39:47.331988 [MAC ] [D] [ 3794] BSR: check_new_data() -> get_buffer_state_lcg(3)=0
14:39:47.331993 [MAC ] [D] [ 3794] BSR: LCID=0 old_buffer=0
14:39:47.332000 [MAC ] [D] [ 3794] BSR: LCID=1 old_buffer=0
14:39:47.332003 [MAC ] [D] [ 3794] BSR: LCID=2 old_buffer=0
14:39:47.332007 [MAC ] [D] [ 3794] BSR: LCID=3 old_buffer=335
And since the buffer state of LCG=2 isn't zero, the new data for LCID=3 of that LCG isn't considered to be new.
So effectively, the BSR routine misses the "empty" buffer state for a fraction of time and doesn't
consider the outgoing data generated in the same TTI as new. It therefore
doesn't transmit a BSR.
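For illustration only, here is a minimal sketch of the pattern involved; the class, member and function names are made up and do not match the actual srsLTE sources. The idea is that the per-TTI buffer-state update and the path that reports/empties the buffers must observe a consistent state (e.g. by sharing a lock), so old_buffer is never advanced past an "empty" state that was reported but immediately overwritten.

```cpp
// Minimal sketch (hypothetical names, not the actual srsLTE code) of the race:
// the Stack thread runs step() once per TTI, while the DL PDU processing thread
// may refill the RLC buffers in between.
#include <array>
#include <cstdint>
#include <mutex>

struct lcg_state_t {
  uint32_t old_buffer = 0; // buffer state at the end of the previous step()
  uint32_t new_buffer = 0; // buffer state sampled in the current step()
};

class bsr_proc_sketch
{
public:
  // Stack thread, once per TTI.
  void step(const std::array<uint32_t, 4>& rlc_buffer_state)
  {
    std::lock_guard<std::mutex> lock(mutex);
    for (size_t lcg = 0; lcg < lcgs.size(); ++lcg) {
      lcgs[lcg].new_buffer = rlc_buffer_state[lcg];
      // "New data" is only detected on a 0 -> non-zero transition of old_buffer.
      if (lcgs[lcg].old_buffer == 0 && lcgs[lcg].new_buffer > 0) {
        trigger_regular_bsr();
      }
    }
    // Without synchronization, the DL PDU thread can refill the RLC buffer between
    // "LBSR with zero buffers was sent" and this update; old_buffer then keeps the
    // stale non-zero value (59 above) and the 0 -> non-zero transition is missed.
    for (auto& lcg : lcgs) {
      lcg.old_buffer = lcg.new_buffer;
    }
  }

  // Called under the same lock when a BSR has just been transmitted for an LCG
  // (e.g. the all-zero LBSR above), so old_buffer reflects what was reported.
  void buffer_state_reported(uint32_t lcg, uint32_t nof_bytes)
  {
    std::lock_guard<std::mutex> lock(mutex);
    lcgs[lcg].old_buffer = lcgs[lcg].new_buffer = nof_bytes;
  }

private:
  void trigger_regular_bsr() { /* signal a Regular BSR / SR */ }

  std::mutex                 mutex;
  std::array<lcg_state_t, 4> lcgs;
};
```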
this was a very noisy log line that was printed in pretty much
every TTI because the BSR procedure starts an SR whenever
it needs to send a regular BSR. The SR is canceled when a UL
grant arrives, but the log line stays there.
Since we already print a log line when we actually signal an SR
to the PHY, this line is not needed.
this fixes the trigger logic for periodic BSRs. Previously we
would always trigger the "new data for highest priority LCID"
condition whenever new data became available for an LCID for which
a BSR had already been sent.
However, a BSR should only be sent if the priority is in fact higher
(lower integer value).
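A minimal sketch of the corrected check, with hypothetical names (the actual member layout in the srsLTE BSR/MUX code differs): new data only triggers a BSR if no other LCID currently has data, or if the new data belongs to an LCID with strictly higher priority (lower number) than every LCID that already has data.

```cpp
// Hedged sketch (hypothetical names): should new data on `new_lcid` trigger a BSR?
#include <cstdint>
#include <map>

struct lcid_state_t {
  uint32_t priority   = 0; // lower number = higher priority
  uint32_t new_buffer = 0; // bytes currently pending for this LCID
};

bool triggers_bsr(const std::map<uint32_t, lcid_state_t>& lcids, uint32_t new_lcid)
{
  const uint32_t new_prio = lcids.at(new_lcid).priority;
  for (const auto& e : lcids) {
    if (e.first == new_lcid || e.second.new_buffer == 0) {
      continue; // skip the LCID itself and LCIDs without pending data
    }
    if (e.second.priority <= new_prio) {
      // An LCID of equal or higher priority already has data: don't trigger again.
      return false;
    }
  }
  // Either no other LCID has data, or all of them have lower priority.
  return true;
}
```

In the buggy version, this comparison was effectively skipped, so any new data on an already-reported LCID triggered another BSR.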
the BSR routine had a bug in which it would generate a BSR even
before the reTx timer expires if new data became available
for an LCID that already had data and for which a BSR had already been sent.
The RA test here relies on a BSR in the generated MAC PDU to pass.
However, since after fixing the BSR bug the MUX unit
no longer generates a BSR for that PDU, we need to generate data for an LCID
which has a higher priority than the one for which a BSR has
already been sent.
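As a hedged sketch with hypothetical names (not the actual srsLTE code), the decision after a BSR has already been sent could look like this: data on the same or a lower-priority LCID only leads to another BSR once the reTx timer expires, while data on a higher-priority LCID triggers one immediately, as in the check sketched above.

```cpp
// Hedged sketch (hypothetical names): trigger decision once a BSR has been sent.
bool trigger_after_bsr_sent(bool new_data_on_higher_prio_lcid,
                            bool retx_timer_expired,
                            bool data_available)
{
  if (new_data_on_higher_prio_lcid) {
    return true; // higher-priority data: trigger a new BSR right away
  }
  // Same or lower priority: wait for the reTx timer, and only if data is pending.
  return retx_timer_expired && data_available;
}
```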
reported/provided by user softdev86 in https://github.com/srsLTE/srsLTE/issues/566
The author tested with a local 4-port cell. I am not able to verify locally, but
it looks ok; we'll revise later if needed.
When using srsLTE with Lime devices, calibration was performed before any configuration steps had happened, thus making the calibration values invalid. Removing the Lime-specific calibration step from rf_soapy_imp means that devices will be automatically calibrated by SoapyLMS on the rf_soapy_start_stream call.
Tested and working with srsENB using LimeSDR-USB v1.4 and LimeSDR-Mini v1.2 boards.
in ZMQ runs we've seen that entering idle could take quite
a bit of time, depending on how quickly workers get their samples
sent or reconfigurations done.
In one example it took up to ~160ms.
this patch increases the maximum wait time to 2s.
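As a rough illustration only (the names below are made up and the actual waiting code in srsLTE differs), the kind of wait being tuned looks like this: block until all workers report idle, but give up after at most 2 s instead of the previous, shorter limit.

```cpp
// Hedged sketch (hypothetical names): wait for all workers before entering idle.
#include <chrono>
#include <condition_variable>
#include <mutex>

class worker_pool_sketch
{
public:
  // Returns true if all workers became idle within the (now 2 s) timeout.
  bool wait_all_idle()
  {
    std::unique_lock<std::mutex> lock(mutex);
    return cvar.wait_for(lock, std::chrono::seconds(2), [this] { return busy_workers == 0; });
  }

  // Called by a worker once its samples are sent / reconfiguration is done.
  void worker_finished()
  {
    std::lock_guard<std::mutex> lock(mutex);
    if (busy_workers > 0 && --busy_workers == 0) {
      cvar.notify_all();
    }
  }

private:
  std::mutex              mutex;
  std::condition_variable cvar;
  unsigned                busy_workers = 0;
};
```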
With the sched feature that allows scheduling in TTIs
ahead of time, there is no guarantee that when
the TTI arrives to generate a sched result, the stored
raw sched_ue pointers are still valid. For this reason,
I now store the RNTI and check whether that RNTI still exists.
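A minimal sketch of the idea, with hypothetical names (the real scheduler classes differ): store the RNTI for decisions taken ahead of time and resolve it again when the TTI result is generated, skipping UEs that have been removed in the meantime.

```cpp
// Hedged sketch (hypothetical names): cache RNTIs instead of raw sched_ue pointers.
#include <cstdint>
#include <map>
#include <vector>

struct sched_ue_sketch {
  // per-UE scheduling state
};

class sched_sketch
{
public:
  // Decision taken for a future TTI: remember only the RNTI.
  void schedule_ahead(uint16_t rnti) { pending_rntis.push_back(rnti); }

  // Called when that TTI arrives and the result is generated.
  void generate_result()
  {
    for (uint16_t rnti : pending_rntis) {
      auto it = ue_db.find(rnti);
      if (it == ue_db.end()) {
        // UE was removed in the meantime; a stored raw pointer would now dangle.
        continue;
      }
      sched_ue_sketch& ue = it->second;
      (void)ue; // ... fill the scheduling result for this UE ...
    }
    pending_rntis.clear();
  }

private:
  std::map<uint16_t, sched_ue_sketch> ue_db;         // owned UE contexts keyed by RNTI
  std::vector<uint16_t>               pending_rntis; // decisions for future TTIs
};
```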