All calls that manipulate the RLC and/or PDCP bearer arrays carry a
high deadlock risk if a PHY worker holds the RLC AM Rx mutex at the
moment the stack wants to carry out the reconfiguration.
This applies to RRC Reconfiguration, but potentially also to RRC Connection
Reestablishment or even RRC Connection Setup, although the latter should
seldom be the case.
By breaking the call chain RLC->PDCP->RRC->RLC and
carrying out the reconfig as a single deferred task without holding the
RLC read lock, the deadlock should not happen anymore.
This should fix issue #1593
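The fix amounts to a deferred-task pattern like the sketch below (illustrative names, not the actual srslte classes): the callback that used to trigger the reconfig only enqueues it, and the stack thread runs it later without any RLC lock held.

    #include <functional>
    #include <mutex>
    #include <queue>

    // Sketch: break the RLC->PDCP->RRC->RLC call chain by queuing the
    // bearer reconfiguration instead of running it while an RLC lock
    // may still be held further down the call stack.
    class stack_task_queue
    {
    public:
      void defer(std::function<void()> task)
      {
        std::lock_guard<std::mutex> lock(mtx);
        tasks.push(std::move(task));
      }

      // Run by the stack thread, outside any RLC/PDCP lock, so the
      // reconfig can take those locks again without deadlocking.
      void run_pending()
      {
        std::queue<std::function<void()>> pending;
        {
          std::lock_guard<std::mutex> lock(mtx);
          std::swap(pending, tasks);
        }
        while (!pending.empty()) {
          pending.front()();
          pending.pop();
        }
      }

    private:
      std::mutex                        mtx;
      std::queue<std::function<void()>> tasks;
    };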
- Imported the srslog project into srslte.
- Ported the srsue app to use the new logging framework.
- Implemented a wrapper that dispatches log entries to srslog (see the sketch below).
- Renamed an existing log test to be more specific to avoid name clashes.
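The wrapper is essentially an adapter from the legacy printf-style interface to srslog; a simplified sketch (the legacy interface and the srslog entry point shown here are stand-ins for the real classes):

    #include <cstdarg>
    #include <cstdio>
    #include <string>

    // Placeholder standing in for the call into srslog; the real
    // wrapper forwards to the framework's logger objects instead.
    static void srslog_write(const std::string& ctx, const std::string& msg)
    {
      std::printf("[%s] %s\n", ctx.c_str(), msg.c_str());
    }

    // Implements the legacy printf-style log interface and dispatches
    // every formatted entry to srslog.
    class log_wrapper
    {
    public:
      explicit log_wrapper(std::string ctx_) : ctx(std::move(ctx_)) {}

      void info(const char* fmt, ...)
      {
        char    buf[1024];
        va_list args;
        va_start(args, fmt);
        vsnprintf(buf, sizeof(buf), fmt, args);
        va_end(args);
        srslog_write(ctx, buf); // hand the formatted entry to srslog
      }

    private:
      std::string ctx;
    };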
* add locked and unlocked versions of has_data(), since one is
called from the stack and the other from PHY threads (see the sketch
after this list)
* add comments in each interface section as to why locking
is required or not
* remove RLC rwlock when not required
* move calls only used by RRC to RRC section
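The locked/unlocked split follows the pattern sketched here (illustrative class and member names, not the actual srslte ones):

    #include <mutex>

    class rlc_am_rx
    {
    public:
      // Locking variant, safe to call from PHY threads.
      bool has_data()
      {
        std::lock_guard<std::mutex> lock(mutex);
        return has_data_nolock();
      }

      // Unlocked variant for stack-side code that already holds the
      // lock, avoiding recursive locking of the same mutex.
      bool has_data_nolock() { return !rx_queue_empty; }

    private:
      std::mutex mutex;
      bool       rx_queue_empty = true;
    };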
This patch refactors the SDU queuing and dropping policy of the RLC and PDCP layers.
The previous design had issues when packets were generated above the PDCP at a
higher rate than they could be consumed below the RLC.
When the RLC SDU queues were full, we allowed two policies: blocking on the write,
or dropping the SDU. Neither option is ideal, since they lead either to a blocked
stack thread or to lost PDCP PDUs.
To avoid this, the patch makes the following changes:
* PDCP monitors RLC's SDU queue and drops packets on its north-bound SAP
if queues are full (see the sketch after this list)
* a new method sdu_queue_is_full() has been added to the RLC interface for PDCP
* remove blocking write from pdcp and rlc write_sdu() interface
* all writes into queues need to be non-blocking
* if Tx queues are overflowing, SDUs are dropped above PDCP, not RLC
* log a warning if the RLC still needs to drop SDUs; this case should be
avoided by the monitoring mechanism
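A sketch of the new drop path (sdu_queue_is_full() is the interface method added by this patch; the surrounding types are simplified stand-ins):

    #include <cstdint>
    #include <cstdio>
    #include <utility>
    #include <vector>

    using byte_buffer_t = std::vector<uint8_t>;

    // Minimal stand-in for the RLC interface seen by PDCP.
    struct rlc_interface_pdcp {
      virtual bool sdu_queue_is_full(uint32_t lcid)            = 0;
      virtual void write_sdu(uint32_t lcid, byte_buffer_t sdu) = 0; // non-blocking
      virtual ~rlc_interface_pdcp()                            = default;
    };

    struct pdcp_entity {
      rlc_interface_pdcp* rlc;
      uint32_t            lcid;

      // Drop on the north-bound SAP while the RLC queue is full, so no
      // PDCP PDU (and no PDCP sequence number) is lost further down.
      void write_sdu(byte_buffer_t sdu)
      {
        if (rlc->sdu_queue_is_full(lcid)) {
          std::printf("Dropping SDU, RLC Tx queue for LCID %u is full\n",
                      static_cast<unsigned>(lcid));
          return;
        }
        rlc->write_sdu(lcid, std::move(sdu));
      }
    };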
Apply the same change that we've done on the eNB also on the UE,
to avoid the PHY processing TTIs faster than the stack can handle them.
Without it, we see lots of these in the logs:
...
08:39:17.580325 [STCK] [W] Detected slow task processing (sync_queue_len=7).
...
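The mechanism is roughly the one sketched below (illustrative names; the real code uses the existing sync/task queue between PHY and stack): the PHY pushes one task per TTI into a bounded queue and blocks when the stack falls behind, instead of running ahead of it.

    #include <condition_variable>
    #include <cstddef>
    #include <functional>
    #include <mutex>
    #include <queue>

    class bounded_tti_queue
    {
    public:
      explicit bounded_tti_queue(size_t max_len_) : max_len(max_len_) {}

      // Called by the PHY once per TTI; blocks while the queue is full,
      // throttling the PHY to the pace of the stack.
      void push(std::function<void()> task)
      {
        std::unique_lock<std::mutex> lock(mtx);
        cv_push.wait(lock, [this] { return q.size() < max_len; });
        q.push(std::move(task));
        cv_pop.notify_one();
      }

      // Called by the stack thread to fetch the next TTI task.
      std::function<void()> pop()
      {
        std::unique_lock<std::mutex> lock(mtx);
        cv_pop.wait(lock, [this] { return !q.empty(); });
        std::function<void()> task = std::move(q.front());
        q.pop();
        cv_push.notify_one();
        return task;
      }

    private:
      size_t                            max_len;
      std::mutex                        mtx;
      std::condition_variable           cv_push, cv_pop;
      std::queue<std::function<void()>> q;
    };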
Before entering RRC idle, after receiving an RRC connection release for example,
we need to wait until the RLC entities for SRB1 and SRB2 have been flushed, i.e.
the RLC has acknowledged the reception of the message.
Previously we only waited for SRB1, but the message can also be received on SRB2,
in which case both bearers need to be checked.
The method is now streamlined to check both SRBs and is also used when
checking the message transmission of a detach request.
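The streamlined check amounts to something like this sketch (rlc_interface_rrc and srbs_flushed are illustrative stand-ins for the actual interface and method):

    #include <cstdint>

    // Stand-in for the RLC status interface used by RRC.
    struct rlc_interface_rrc {
      virtual bool has_data(uint32_t lcid) = 0;
      virtual ~rlc_interface_rrc()         = default;
    };

    // Before entering RRC idle (or declaring the detach request sent),
    // make sure the RLC Tx buffers of BOTH SRB1 and SRB2 are flushed,
    // i.e. all outstanding PDUs have been acknowledged.
    bool srbs_flushed(rlc_interface_rrc& rlc)
    {
      const uint32_t srb1_lcid = 1, srb2_lcid = 2;
      return !rlc.has_data(srb1_lcid) && !rlc.has_data(srb2_lcid);
    }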
* Added the appropriate code for handling and sending the
re-establishment procedure messages to rrc_ue.c/.h.
* Triggered RRC reconfiguration after the reception of RRC
re-establishment complete
* Refreshed K_eNB upon reception of the re-establishment
request.
* Changed the mapping of TEIDs to RNTIs in the GTP-U layer,
as the RNTI might change with reestablishment.
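On the GTP-U side, the mapping change amounts to something like this sketch (class and method names are illustrative): keying the table by TEID lets the RNTI be updated in place when re-establishment assigns a new one.

    #include <cstdint>
    #include <unordered_map>

    class gtpu_teid_map
    {
    public:
      void add_tunnel(uint32_t teid, uint16_t rnti) { teid_to_rnti[teid] = rnti; }

      // Called when re-establishment assigns the UE a new RNTI; the
      // (stable) TEID stays the key, only the mapped value changes.
      void update_rnti(uint32_t teid, uint16_t new_rnti) { teid_to_rnti[teid] = new_rnti; }

      // Downlink packets are matched by their TEID.
      bool lookup(uint32_t teid, uint16_t& rnti) const
      {
        auto it = teid_to_rnti.find(teid);
        if (it == teid_to_rnti.end()) {
          return false;
        }
        rnti = it->second;
        return true;
      }

    private:
      std::unordered_map<uint32_t, uint16_t> teid_to_rnti;
    };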
Fix the wrong ra_rnti calculation in ra_proc::state_pdcch_setup.
According to TS 36.321 Section 5.1.4 (Random Access Response reception), the RA-RNTI is given by

    RA-RNTI = 1 + t_id + 10*f_id,

where t_id is the index of the first subframe of the specified PRACH (0 ≤ t_id < 10), and f_id is the index of the specified PRACH within that subframe, in ascending order of frequency domain (0 ≤ f_id < 6). Comparing the srslte source code against this formula shows that the calculation is wrong and needs fixing.
Note that the wrong code happens to work for LTE FDD, since info_f_id is always 0 there, but it yields an incorrect RA-RNTI for LTE TDD.
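For reference, a direct transcription of the formula (the helper name compute_ra_rnti is ours, not the srslte one):

    #include <cstdint>

    // RA-RNTI per TS 36.321 Section 5.1.4:
    //   RA-RNTI = 1 + t_id + 10 * f_id
    // t_id: index of the first subframe of the specified PRACH (0 <= t_id < 10)
    // f_id: index of the specified PRACH within that subframe, in ascending
    //       order of frequency domain (0 <= f_id < 6); always 0 for FDD
    static inline uint16_t compute_ra_rnti(uint32_t t_id, uint32_t f_id)
    {
      return static_cast<uint16_t>(1 + t_id + 10 * f_id);
    }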