Developer Notes
This section is a guide for anyone testing existing features, fixing bugs, or adding new features to the Access Gateway. All developers are highly encouraged to maintain this guide so that it stays up to date and continues to grow.
Configuration/system settings
If you have a gateway running in a VM (as described in the Quick Start Guide), the `magma` directory is shared between the guest and host machine, so changes made on either system are reflected on the other. The exceptions to this rule are the systemd unit files and Python scripts: changes to these files on the guest or host need to be manually synced.
Configuration files/directories
- `/etc/magma/`: location of default configurations and templates for all services
- `/etc/magma/gateway.mconfig`: main file that contains the configuration for all services, as exposed via the Orc8r API
- `/var/opt/magma/configs/gateway.mconfig`: for gateways connected to an orchestrator, the configuration from Orc8r is periodically streamed to the gateway and written here. This streamed config takes precedence over `/etc/magma/gateway.mconfig`.
- `/etc/magma/<service_name>.yml`: service configuration file, in YAML format. These configurations are local and are not exposed through the API. They include logging level, local network interface names, etc.
- `/etc/magma/templates/<service_name>.conf.template`: structured template for the `<service_name>.conf` file used as input to some services, such as Control-proxy, Dnsd, MME and Redis.
- `/var/opt/magma/tmp/<service_name>.conf`: configuration file read by some services, such as Control-proxy, Dnsd, MME and Redis, at start-up. This file is generated by mapping the configuration values from `gateway.mconfig` and `<service_name>.yml` to the template defined in `<service_name>.conf.template`.
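The mapping from `gateway.mconfig` and `<service_name>.yml` values onto a `.conf` template can be sketched roughly as follows. This is an illustrative stand-in only: the real generators are the `generate_<service>_config.py` scripts, which use Magma's actual mconfig and YAML loaders, and the key names and template text below are invented for the example.

```python
# Illustrative sketch of the config-generation step; not Magma code.
# Key names and template content here are made up for the example.
from string import Template

# Values that would come from gateway.mconfig (streamed or default)
mconfig_values = {"log_level": "INFO"}
# Values that would come from /etc/magma/<service_name>.yml
yml_values = {"local_port": "3386"}

# Stand-in for /etc/magma/templates/<service_name>.conf.template
conf_template = Template("log_level=$log_level\nport=$local_port\n")

# The generated /var/opt/magma/tmp/<service_name>.conf content
rendered = conf_template.substitute({**mconfig_values, **yml_values})
print(rendered)
```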
Systemd unit configuration files
- `/etc/systemd/system/magma@<service_name>.service`: systemd unit files for Magma services. Note that these files are maintained under `magma/lte/gateway/deploy/roles/magma/files/systemd/` and are copied into the `/etc/systemd/system` directory of the VM at the time of provisioning. You need to manually sync changes to these files between guest and host OS.
Python scripts to generate configs
- `generate_<service>_config.py`: scripts that generate the `<service_name>.conf` file for some services. These are executed every time a service starts. Note that these files are maintained under `magma/lte/gateway/python/scripts` and copied to the `/usr/local/bin` directory in the guest at the time of provisioning. Changes to these scripts need to be manually synced between the guest and host OS.
Testing
Connecting a physical eNodeB and UE to Gateway VM
While the S1ap integration tests provide simulated UEs and eNodeBs to test your AGW VM during active development, you can extend the testing to a physical UE and eNodeB. To connect a physical eNodeB to the gateway VM:

- Connect the eNodeB to a port on the host machine; say it is interface `en9`.
- From the VirtualBox GUI, switch Adapter 2 (for the `eth1` interface) from `Host-only` to `Bridged` mode and bridge it to interface `en9` from above.
- In the gateway VM, modify the `nat_iface` in `/etc/magma/pipelined.yml` from `eth2` to `eth0`. Restart all services.
- In the gateway VM, follow the steps in EnodeB Configuration. Make sure the `earfcn` set in the enodebd section of `gateway.mconfig` is one supported by the eNodeB under consideration. Use this calculator to get the `earfcn` corresponding to the frequency range listed on the eNodeB.
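The conversion such calculators perform follows the formula in 3GPP TS 36.101: F_DL = F_DL_low + 0.1 × (N_DL − N_Offs-DL). A minimal sketch, with the band table trimmed to two common FDD bands for illustration (the full table in the spec is much larger):

```python
# Downlink frequency from an EARFCN, per 3GPP TS 36.101:
#   F_DL = F_DL_low + 0.1 * (N_DL - N_Offs_DL)
# Only two common bands are listed here; extend the table as needed.
BANDS = {
    # band: (F_DL_low in MHz, N_Offs_DL, first EARFCN, last EARFCN)
    3: (1805.0, 1200, 1200, 1949),
    7: (2620.0, 2750, 2750, 3449),
}

def earfcn_to_dl_mhz(earfcn: int) -> float:
    for f_dl_low, n_offs, lo, hi in BANDS.values():
        if lo <= earfcn <= hi:
            return f_dl_low + 0.1 * (earfcn - n_offs)
    raise ValueError(f"EARFCN {earfcn} is not in the bands listed here")

print(earfcn_to_dl_mhz(3100))  # a band 7 EARFCN -> 2655.0 MHz
```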
To connect a physical UE to the gateway VM:

- Use a programmable SIM which is provisioned with the LTE auth key that you will use in the EPC.
- On the gateway VM, add the subscriber using the CLI:

```
magtivate
subscriber_cli.py add --lte-auth-key <base64 LTE auth key> IMSI<15 digit IMSI>
```

- On the UE, turn airplane mode on, then off, to trigger a fresh attach.
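The `subscriber_cli.py` command above expects the LTE auth key in base64, while SIM provisioning tools often print it as 32 hex characters. A quick way to convert between the two (the key below is a dummy example value, not a real key):

```python
# Convert a hex-encoded LTE auth key (K) to the base64 form expected by
# subscriber_cli.py. The key here is a dummy value for illustration.
import base64

hex_key = "000102030405060708090a0b0c0d0e0f"
b64_key = base64.b64encode(bytes.fromhex(hex_key)).decode()
print(b64_key)  # AAECAwQFBgcICQoLDA0ODw==
```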
Connecting a physical AGW to S1AP test VM
Another useful combination is to run the S1ap integration tests on a physical AGW directly connected to the host machine running the magma_test VM.
On the magma_test host machine
- Connect `eth1` of the physical AGW to a port on the host machine; say it is interface `en9`.
- From the VirtualBox GUI, switch Adapter 1 (for the `eth1` interface) from `Host-only` to `Bridged` mode and bridge it to interface `en9` from above.
On the physical AGW
- Change the static IP address of `eth1` to match the one expected by the test VM:

```
sudo sed -i 's/address 10.0.2.1/address 192.168.60.142/g' /etc/network/interfaces.d/eth1
sudo ifdown eth1
sudo ifup eth1
sudo ip addr del 10.0.2.1/24 dev eth1
```

- Enable DEV_MODE:

```
sudo bash -c 'echo "MAGMA_DEV_MODE=1" >> /etc/environment'
```

- Enable the Pipelined `ryu_rest_service`:

```
grep -qF 'ryu_rest_service' /etc/magma/pipelined.yml || sudo sed -i "s/static_services: \[/static_services: \[ \'ryu_rest_service\',/g" /etc/magma/pipelined.yml
```

- Restart all services:

```
sudo service sctpd restart
```
On the test VM
- Change credentials:

```
sudo sed -i 's/"vagrant"/"magma"/g' /home/vagrant/magma/lte/gateway/python/integ_tests/s1aptests/s1ap_utils.py
```

- Disable use of `magtivate`:

```
sudo sed -i 's/magtivate_cmd + " && " + //g' /home/vagrant/magma/lte/gateway/python/integ_tests/s1aptests/s1ap_utils.py
```
Now, you can run the S1ap integration tests as usual.
Debugging
Logs
To change the logging level for a particular service, modify the `log_level` in `/etc/magma/<service_name>.yml`. If `log_level` is missing, just add it. You can use `INFO`, `WARNING`, `ERROR`, `DEBUG`. When `log_level` is missing, the default level is the lowest, `INFO`.

To see GRPC messages in your log, check the next section.
You can check the logs in different places:

- `/var/log/syslog`: gives a good view of all the Magma services running on the AGW. This is a good place to check whether the AGW is connecting to the orchestrator, any GRPC errors, or which service is causing a cascaded crash (e.g. a crash in Sessiond can cause the MME service to terminate). A good way to filter the logs from individual processes is with `journalctl`. For example, to look at logs from SubscriberDb use: `sudo journalctl -fu magma@subscriberdb`
- `/var/log/mme.log`: a symbolic link that points to the latest log file created by the MME service. The MME service creates a new log file with name `MME.magma-dev.root.log.INFO.<date>-<time>.<PID>` every time the service is (re)started. The AGW maintains the 10 most recent log files.
- `/var/log/enodebd.log`: contains the logs from Enodebd
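If the `/var/log/mme.log` symlink is ever stale, you can locate the newest MME log file yourself by modification time. A sketch, demonstrated on a temporary directory so it runs anywhere (on a real AGW you would glob under `/var/log` instead):

```python
# Find the most recent MME log file by mtime. Demonstrated on fake files
# in a temporary directory; on an AGW, glob under /var/log instead.
import glob
import os
import tempfile
import time

with tempfile.TemporaryDirectory() as log_dir:
    # Create two fake MME log files with different mtimes
    for name, age in [("MME.magma-dev.root.log.INFO.20200101-000000.100", 100),
                      ("MME.magma-dev.root.log.INFO.20200102-000000.200", 0)]:
        path = os.path.join(log_dir, name)
        open(path, "w").close()
        os.utime(path, (time.time() - age, time.time() - age))

    # Pick the file with the newest modification time
    newest = max(glob.glob(os.path.join(log_dir, "MME.*.log.INFO.*")),
                 key=os.path.getmtime)
    print(os.path.basename(newest))
```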
Enabling GRPC printing on log
If you want to see the content of GRPC messages, you will have to enable GRPC printing, which is disabled by default. The message contents appear in the same log mentioned in the previous section (`/var/log/syslog`).

Note that GRPC printing is independent of the `log_level`, so you can enable GRPC printing even if your level is set to `INFO`.
To enable GRPC logging for `magmad`, `sessiond`, `pipelined`, `mobilityd`, `directoryd`, `subscriberdb` or `state`, modify this line in the `/etc/magma/<service_name>.yml` config file:

```
print_grpc_payload: true
```
CLIs
Many services have a command line interface (CLI) that can be used for debugging and configuration. Check the AGW Readme for more details.
Analyzing raw network packets
If you want to look at the raw packets sent over the network, `tcpdump` is a useful tool to capture and save them to a file. It is not installed by default on the access gateway, so you need to run `sudo apt-get install tcpdump` to install it. Next, you can:

- Capture packets on the eNodeB interface:

```
sudo tcpdump -i eth1 -w <file_name>
```

- Capture packets on all interfaces (filter out SSH traffic so your local SSH session doesn't bloat the capture):

```
sudo tcpdump -i any "not port 22" -w <file_name>
```
Analyzing coredumps
If any of the AGW services crash with a segmentation fault, a coredump is preserved in a directory named `core-<timestamp>-<process_name>[-<PID>]` under the `/tmp` directory on the AGW. For example:

- `core-1558015879-ITTI_bundle` is a coredump from MME
- `core-1585761965-sessiond-12741_bundle` is a coredump from sessiond
- `core-1582710288-python3-17823_bundle` is a coredump from a Python service running with PID 17823
For the coredumps generated by MME or sessiond, you can read them with `gdb`, as follows:

```
cd /tmp/<core-directory>/
gunzip <core gzip file>
gdb /usr/local/bin/<process name> <unzipped core file>
```

From within the gdb shell, the `bt` command will display the backtrace for the segmentation fault.
Running MME with gdb
If you need to debug MME with gdb, make sure all the services it depends on are already running. Follow the steps below:

```
sudo service magma@magmad start
sudo service magma@mme stop
sudo service sctpd start
sudo service magma@mobilityd start
sudo service magma@pipelined start
sudo service magma@sessiond start
sudo gdb /usr/local/bin/mme
```
Checking Redis entries for stateless services
When the services are running in stateless mode, as described in Testing stateless Access Gateway, you can connect to the redis service with `redis-cli -p 6380`. Then, on the shell, you can list all the keys with `KEYS *`. The keys for state follow the pattern `<IMSI>:<Service/MME task>`. For example:

- `IMSI001010000000001:SPGW` is the state preserved for IMSI 001010000000001 by the SPGW task in the MME service
- Keys such as `spgw_state`, `s1ap_state`, `mme_nas_state` are used to store gateway-wide state for a particular task in the MME process
- Mobilityd stores state with the key `mobilityd:sid_to_descriptors`
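The two key shapes above, per-subscriber (`<IMSI>:<task>`) and gateway-wide, can be told apart mechanically. A small illustrative helper (not part of the Magma codebase):

```python
# Split a state key into (imsi, task) if it is per-subscriber, or
# (None, key) for gateway-wide state. Illustrative only.
def parse_state_key(key: str):
    if key.startswith("IMSI") and ":" in key:
        imsi, task = key.split(":", 1)
        return imsi, task
    return None, key  # gateway-wide state such as s1ap_state

print(parse_state_key("IMSI001010000000001:SPGW"))
print(parse_state_key("mme_nas_state"))
```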
For tracing the state values, there is a dedicated CLI for debugging the data that different Magma services write into Redis. You can access it through the `state_cli.py` command.

For example, `state_cli.py keys s1ap` will display the current keys that match the `s1ap` pattern:

```
state_cli.py keys s1ap
s1ap_state
IMSI001010000000001:S1AP
```

Then you can access a given key with `state_cli.py parse <key>`:
```
state_cli.py parse s1ap_state
enbs {
  key: 1
  value {
    enb_id: 138777000
    s1_state: 2
    enb_name: "HeNodeb"
    s1ap_enb_assoc_clean_up_timer {
      id: -1
      sec: 4
    }
    sctp_assoc_id: 1
    next_sctp_stream: 1
    instreams: 2
    outstreams: 2
  }
}
num_enbs: 1
```
Combining XML Unit test reports
At the time of writing, running the unit test targets does not generate an aggregated XML report file by default. To get a report of all unit tests, you have two options.
Option 1: Running unit test target
If you prefer a simple method, you can simply run:

```
cd ${MAGMA_ROOT}/lte/gateway/
make test_oai_runtime
```

You will get the combined report at `${MAGMA_ROOT}/report/merged_report/report_all_tests.xml`.
Option 2: Manual method
The script `runtime_report.py` is intended to be used as in option 1. This section serves only as a guideline in case a developer would like to change this script, or how it is used, in the future.
Enabling generation of the unit test report
Inside the magma VM, set the value of `$GTEST_OUTPUT` as below:

```
export GTEST_OUTPUT=xml:/where/you/want/to/store/the/reports/
```

Then run the OAI unit test targets as usual with `make test_oai`. The report XML files are generated inside your chosen folder.
Combine all the reports into a single report file
Assume that you chose to store all the generated report files at `/where/you/stored/the/reports/`, which is called the working directory here. You can simply set a regex pattern that matches the relative paths of all XML report files from the working directory, as in the example below:

```
cd ${MAGMA_ROOT}/lte/gateway/
## optional: activate your python virtual environment
python3 python/scripts/runtime_report.py -i [regex_pattern_matches_relative_path_of_report] -w /where/you/stored/the/reports/
```

For example, if you generated all the `.xml` report files at `${MAGMA_ROOT}/logs/` and want to combine them, you can run:

```
python3 python/scripts/runtime_report.py -i '.+\.xml$' -w ${MAGMA_ROOT}/logs/
```

By default, the XML output file will be generated at `${MAGMA_ROOT}/report/merged_report/report_all_tests.xml`.
You can override this using the `-o` option of `runtime_report.py`. Reusing the example above:

```
python3 python/scripts/runtime_report.py -i '.+\.xml$' -w ${MAGMA_ROOT}/logs/ -o /where/you/want/to/store/this/file.xml
```
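The `-i` pattern is a Python regex matched against the report paths relative to the working directory. A quick sanity check of the pattern, and the reason to quote it on the command line: in an interactive shell an unquoted `$$` expands to the shell's PID, silently breaking the match.

```python
# Sanity-check the report-matching regex against some sample paths.
# The file names here are invented examples.
import re

pattern = re.compile(r".+\.xml$")
paths = ["unit_tests/report1.xml", "report2.xml", "notes.txt"]
matches = [p for p in paths if pattern.match(p)]
print(matches)  # ['unit_tests/report1.xml', 'report2.xml']
```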