How To Actually Secure Your Linux Server
Looking around the Internet you can find a never-ending debate about what you can do to protect a Linux server that's exposed to the Internet. One thing you need to keep in mind is that you have to take a defense-in-depth position on security. For example, you can't just patch your system every month while running an exposed web application as root, especially when the same web application can run as an unprivileged user. Furthermore, if you could get away with not exposing it to the Internet at all and instead using a peer-to-peer VPN, that would be far more secure.
Defense In Depth
Defense in depth is a security strategy that involves implementing multiple layers of protection for a system or network. The goal of defense in depth is to create a redundant and overlapping series of safeguards that can help to prevent or mitigate the impact of a security breach.
In a defense in depth strategy, multiple security measures are implemented at different levels of the system, including network-level, host-level, and application-level controls. For example, a network might have firewalls, intrusion detection systems, and VPNs at the network level, as well as antivirus software and host-based firewalls on individual hosts.
The idea behind defense in depth is that if one layer of protection is breached, the other layers will still be in place to provide additional protection and prevent or mitigate the impact of the breach. This can help to increase the overall security of the system and reduce the risk of a successful attack.
Defense in depth is a proactive approach to security that involves continuously assessing and improving the security of a system, rather than simply reacting to threats as they arise. It is generally considered to be a best practice for securing systems and networks.
No Public Remote Access
There are several ways to access a server without port-forwarding, depending on the specific needs and requirements of the situation. Some options include:
- Using a traditional VPN (Virtual Private Network): A VPN can be used to establish a secure, encrypted connection to a remote server, allowing you to access the server as if you were on the same local network. This can be a useful option if you need to access a server from a remote location and port-forwarding is not possible.
- Tailscale: A peer-to-peer mesh VPN, built on WireGuard, that is very easy to deploy on devices. You can quickly roll it out and reach web servers and SSH/RDP over the mesh. You can also enable relay nodes so that other devices route all their traffic through one machine, acting like a traditional VPN while still giving you the benefits of peer-to-peer.
- ZeroTier: Another peer-to-peer mesh VPN, but one that operates at Layer 2 of the OSI model. That means, unlike Tailscale, devices can see lower-level traffic such as broadcasts, and they appear to be on the same local network as you.
- Cloudflare Tunnel: A free service from Cloudflare for reaching web applications without opening ports on your network. You install their Tunnel software (cloudflared) on a server and configure the forwarding there; traffic then passes through Cloudflare's network and firewall. No one sees your real IP address, and no one can reach your system directly over the Internet.
A mesh VPN (Virtual Private Network) is a type of VPN that allows multiple devices to connect to each other over a network and form a mesh network. In a mesh VPN, each device acts as a VPN client and server, allowing other devices to connect to it and form a network.
One of the main benefits of a mesh VPN is that it allows devices to connect directly to each other, rather than going through a central server or gateway. This can make the network more resilient and allow it to continue functioning even if some devices are offline or disconnected.
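Both Tailscale and ZeroTier build on the same underlying idea as a hand-rolled WireGuard setup. As a rough sketch of what such a tunnel looks like under the hood, here is a minimal WireGuard client configuration; the keys, addresses, and hostname below are placeholders, not working values:

```
# /etc/wireguard/wg0.conf -- illustrative client config; keys, IPs, and the
# endpoint hostname are placeholders for your own values.
[Interface]
# This machine's private key (generate a pair with: wg genkey | tee key | wg pubkey)
PrivateKey = <client-private-key>
Address = 10.0.0.2/24

[Peer]
# The server's public key
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route only the VPN subnet through the tunnel
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25
```

With the `wireguard-tools` package installed, `wg-quick up wg0` brings the tunnel up; the mesh products above essentially automate this key exchange and peer discovery for you.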
Read-Only File System
A read-only file system can provide some security benefits by preventing users from making changes to the files and directories on the system’s file system. This can help to prevent accidental or malicious modifications to the main OS file system, and can also help to ensure that the file system remains in a known and trusted state. This doesn’t necessarily mean all file systems, like home directories, are read only; just the file system the OS is installed to. This can prevent a malicious actor from dropping malware into trusted system files.
However, it’s important to note that a read-only file system is not a complete security solution, and does not provide protection against all types of threats. For example, a read-only file system would not prevent a user with access to the system from installing malicious software or modifying system configuration files (if they have elevated privileges). It would also not prevent an attacker from exploiting vulnerabilities in the operating system or other software running on the system.
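As a concrete sketch, one common way to get a read-only OS file system while keeping home directories writable is through /etc/fstab. The device names below are placeholders for your own partition layout:

```
# /etc/fstab -- illustrative entries; device names are placeholders.
# The root (OS) file system is mounted read-only, /home stays writable.
/dev/sda2  /      ext4  ro,defaults               0  1
/dev/sda3  /home  ext4  rw,defaults,nosuid,nodev  0  2
```

When you do need to apply updates, you can temporarily switch back with `mount -o remount,rw /` and return to read-only afterwards with `mount -o remount,ro /`.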
Hash Verification
Hash verification can be used to verify the integrity of a system or to alert when files have been altered. You can use dedicated software to protect and alert, or write scripts around standard hashing commands to perform these checks yourself.
- AIDE (Advanced Intrusion Detection Environment): AIDE is an open-source security and integrity monitoring tool that can be used to detect changes to critical files and directories on a Linux system. It uses hashes to verify the integrity of monitored files, and can alert administrators if any changes are detected.
- OSSEC: OSSEC is an open-source host-based intrusion detection system (HIDS) that can be used to monitor and protect Linux systems. It includes a file integrity monitoring module that can be used to detect changes to critical files and directories, and can alert administrators if any changes are detected.
- Samhain: Samhain is an open-source host-based intrusion detection system (HIDS) that can be used to monitor and protect Linux systems. It includes a file integrity monitoring module that can be used to detect changes to critical files and directories, and can alert administrators if any changes are detected.
- Lynis: Lynis is an open-source security auditing tool that can be used to audit and harden Linux systems. Rather than monitoring files itself, it audits the system's configuration and reports on hardening opportunities, including whether file integrity tooling is in place.
- Hashdeep: A command-line utility that can calculate multiple hashes (such as SHA-256, MD5, and SHA-1) for a file or directory. It can also be used to compare the calculated hashes to known-good hashes, and can display a report of any discrepancies.
- Tripwire: A security and integrity monitoring tool, available in both an open-source and a commercial edition, that can be used to detect changes to critical files and directories on a Linux system. Tripwire uses hashes to verify the integrity of monitored files, and can alert administrators if any changes are detected.
These are just a few examples of hash check software available for Linux. There are many other options available, and the best option for your needs will depend on your specific requirements and the type of system you are using.
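The do-it-yourself approach mentioned above can be as simple as a baseline file produced by sha256sum. The script below is a minimal sketch; the directory and file names are illustrative placeholders, not real system paths:

```shell
#!/bin/sh
# Sketch: a minimal do-it-yourself integrity check with sha256sum.
set -eu

dir=$(mktemp -d)
echo "server_name example.com" > "$dir/app.conf"

# 1. Record known-good hashes while the system is in a trusted state.
( cd "$dir" && sha256sum app.conf > baseline.sha256 )

# 2. Later, verify: sha256sum -c exits non-zero if any file changed.
( cd "$dir" && sha256sum -c baseline.sha256 )

# 3. Simulate tampering and check again.
echo "malicious change" >> "$dir/app.conf"
if ( cd "$dir" && sha256sum -c baseline.sha256 >/dev/null 2>&1 ); then
    echo "unexpected: tampering not detected"
else
    echo "tampering detected"
fi
```

In practice you would store the baseline somewhere the monitored host cannot silently rewrite it, and run the check from cron or a systemd timer; the dedicated tools above add exactly that kind of protected baseline and alerting.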
Unprivileged Applications
It is generally not recommended to run applications as the root user in Linux or other Unix-like operating systems because it can expose the system to security risks. The root user has complete control over the system, and any vulnerabilities in a root-privileged application could potentially be exploited to compromise the entire system.
By running applications as a non-root user with limited privileges, you can help to mitigate the risk of vulnerabilities being exploited. If an attacker were to compromise a non-root application, they would be limited in the actions they could take on the system, as they would not have access to the full range of root privileges.
In addition, running applications as root can also make it more difficult to track and manage changes to the system, as the root user can make any changes they want without leaving a trace. This can make it more difficult to troubleshoot issues and maintain the system over time.
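One practical way to run a service unprivileged is a systemd unit that pins it to a dedicated account and layers on sandboxing. The service name, account, and binary path below are placeholders for your own application:

```
# /etc/systemd/system/myapp.service -- illustrative unit; the service name,
# user, and binary path are placeholders.
[Unit]
Description=Example web application running unprivileged
After=network.target

[Service]
# Run as a dedicated non-root account instead of root.
User=myapp
Group=myapp
ExecStart=/usr/local/bin/myapp
# Extra hardening offered by systemd:
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```

If the application needs to serve on a privileged port (below 1024), it can sit behind a reverse proxy, or be granted just that one capability, rather than running as root.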
Patching
The frequency with which you should patch your applications depends on a few factors, including the type of application, the potential impact of vulnerabilities, and the availability of patches. In general, it’s a good idea to patch your applications as soon as patches are available, especially if the vulnerabilities that they address are considered high-risk or critical.
For applications that are critical to the operation of your business or that handle sensitive data, it may be necessary to patch them more frequently. For example, if you are using an application that is connected to the Internet and handles financial transactions, you may want to patch it more frequently to ensure that it is secure.
On the other hand, for applications that are not as critical or that do not handle sensitive data, it may be sufficient to patch them on a less frequent basis. For example, you may be able to patch a productivity application every few months, rather than every time a patch is released.
You will need to weigh your risk level and patch accordingly. For me, anything exposed to the Internet takes priority, and in many cases (for my home setup) it is patched automatically, since I have backups to recover from. In an enterprise environment, there should be a policy in place that allows patching critical vulnerabilities through an Emergency Change Advisory Board request.
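On Debian-based systems, the automatic patching I use at home can be done with the unattended-upgrades package and a small configuration fragment like the following:

```
# /etc/apt/apt.conf.d/20auto-upgrades (Debian/Ubuntu, with the
# unattended-upgrades package installed) -- refresh package lists daily
# and install available security updates automatically.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Other distributions have equivalents (for example dnf-automatic on the Red Hat family); the point is to make low-risk security patches apply themselves so that only the disruptive updates need a human.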
Firewall
While it's not always needed, especially if you're not opening ports on a system to allow inbound traffic, you should consider a firewall if you want to take things a bit further. I would highly recommend using the firewall to control not only inbound traffic but outbound as well. If your web server only needs to allow established connections out, or new connections out to certain IPs, then lock that down. That way you control not only who can come in, but also what can go out, which helps against things like data exfiltration or your system being used in a botnet. Some popular Linux firewalls include:
- iptables: A Linux utility that allows administrators to configure rules for how incoming and outgoing network traffic is handled. It is a powerful and flexible firewall that can be used to control access to a network or a device, and is often used to secure servers and other critical systems.
- UFW (Uncomplicated Firewall): A firewall utility designed to be easy to use and configure. It is based on iptables, and provides a simple command-line interface for managing firewall rules.
- Firewalld: Firewalld is a firewall daemon that is designed to provide a dynamic firewall solution for Linux systems. It allows administrators to define zones for different network interfaces and to apply rules to those zones.
- Shorewall: Shorewall is a firewall utility that is designed to be flexible and easy to use. It allows administrators to define firewall rules using a configuration file, and provides a range of features and options for controlling access to a network or a device.
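As a sketch of the default-deny, outbound-restricted posture described above, here is an illustrative ruleset in iptables-restore format. The allowed outbound address 203.0.113.10 is a documentation placeholder, and the open ports are examples:

```
# Illustrative ruleset for iptables-restore: deny by default in both
# directions, then allow only what the server actually needs.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
# Loopback and already-established traffic in both directions.
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Inbound: only SSH and HTTPS.
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
# Outbound: DNS, plus new connections only to one specific host.
-A OUTPUT -p udp --dport 53 -j ACCEPT
-A OUTPUT -p tcp -d 203.0.113.10 --dport 443 -j ACCEPT
COMMIT
```

Loading this with `iptables-restore` (as root) replaces the current ruleset; test over a console session first so a mistake in the outbound rules doesn't lock you out.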
Remote Authentication
SSH (Secure Shell) supports public-key authentication, most commonly with RSA (Rivest-Shamir-Adleman) keys. RSA is a widely used public-key cryptosystem for encrypting and signing data. In the context of SSH, key pairs serve two roles: the server's host key lets the client authenticate the server, and the user's key pair lets the server authenticate the client, all over an encrypted connection.
There are several benefits to using SSH key-based authentication:
- Improved security: Because authentication uses a cryptographic key pair rather than a password, it is far less vulnerable to brute-force attacks and other password-based attacks.
- Ease of use: SSH private keys can be stored in a secure location, such as a hardware token, making it easy for users to authenticate without having to remember a complex password.
- Auditability: SSH key authentication can be logged, allowing administrators to track and monitor access to the server.
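Enforcing key-only logins comes down to a few sshd_config directives. The fragment below is illustrative; the account names in AllowUsers are placeholders:

```
# /etc/ssh/sshd_config (fragment) -- enforce key-based authentication.
# On the client, generate a key pair first, e.g.: ssh-keygen -t rsa -b 4096
# and copy the public key to the server with: ssh-copy-id user@server
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin no
# Optional: only allow specific accounts to log in remotely.
AllowUsers deploy admin
```

After editing, reload the daemon (e.g. `systemctl reload sshd`), but keep an existing session open until you have confirmed key login works, so you don't lock yourself out.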
SELinux
SELinux (Security-Enhanced Linux) is a Linux kernel security module that provides a flexible mandatory access control (MAC) system for controlling access to resources in the operating system. SELinux was developed by the United States National Security Agency (NSA) and is now included in many Linux distributions, including Red Hat, CentOS, and Fedora.
SELinux allows administrators to define policies that specify which users and processes have access to specific resources, such as files, directories, and network ports. It operates at a lower level than traditional Linux access controls, such as file permissions and ownership, and provides an additional layer of security by enforcing policies on all processes and users, regardless of their privileges.
SELinux is often used in enterprise and government environments to secure servers and other critical systems. It can help to prevent unauthorized access and privilege escalation, and can also be used to enforce compliance with security policies.
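To make the policy idea concrete, here are a few illustrative SELinux administration commands on the Red Hat family (the web root path and port number are examples, not recommendations):

```
# Report the current mode (Enforcing/Permissive/Disabled) and policy details.
getenforce
sestatus

# Label a non-standard web root so a confined httpd process may serve it:
semanage fcontext -a -t httpd_sys_content_t "/srv/www(/.*)?"
restorecon -Rv /srv/www

# Allow sshd to listen on a non-default port under the targeted policy:
semanage port -a -t ssh_port_t -p tcp 2222
```

The key point is that even a root-owned process is confined to the resources its policy type allows, which is exactly the extra layer defense in depth asks for.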
Logging and Alerting
There are many logging software options that you can use in your environment, depending on your specific needs and requirements. Some popular options include:
- Graylog: An open-source log management platform that can be used to collect, index, and analyze log data from a variety of sources. Graylog offers a web-based interface for searching and analyzing log data, and can be deployed on-premises or in the cloud.
- Splunk: A commercial log management platform that can be used to collect, index, and analyze log data from a variety of sources. Splunk offers a range of features and integrations, and is suitable for use in enterprise environments.
- ELK Stack (Elasticsearch, Logstash, and Kibana): An open-source log management platform that consists of Elasticsearch (a search and analytics engine), Logstash (a data collection and processing pipeline), and Kibana (a visualization tool). The ELK Stack can be used to collect, index, and analyze log data, and offers a range of customization options.
- Logz.io: A cloud-based log management platform that can be used to collect, index, and analyze log data from a variety of sources. Logz.io offers a range of features and integrations, and can be used for both small and large-scale deployments.
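Whichever platform you choose, the host side often starts with forwarding syslog to a central collector, since logs kept only on a compromised machine can be erased by the attacker. A minimal rsyslog fragment looks like this; the hostname is a placeholder:

```
# /etc/rsyslog.d/90-forward.conf -- forward all logs to a central collector.
# "logs.example.com" is a placeholder; @@ forwards over TCP, a single @ over UDP.
*.* @@logs.example.com:514
```

After restarting rsyslog, every local log line also lands on the collector, where your alerting rules live out of the attacker's reach.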
No Access to Proc
The /proc directory is a special directory in the Linux file system that contains virtual files providing information about the system and its processes. These virtual files do not exist on a physical storage device, but are generated on the fly by the kernel when they are accessed. When I managed a Linux system that a large number of students accessed, we did not allow students to access the /proc directory.
This prevented them from seeing other users on the system, or the processes of other users. For the purpose of that server (submitting assignments) this was acceptable and provided a little extra security.
The /proc directory contains a number of subdirectories and files, including:
- /proc/cpuinfo: Contains information about the CPU, including the model, architecture, and other details.
- /proc/meminfo: Contains information about the system’s memory, including the total amount of memory, the amount of used and free memory, and other details.
- /proc/modules: Contains a list of the kernel modules that are currently loaded.
- /proc/version: Contains information about the version of the Linux kernel that is running.
- /proc/[PID]: For each process running on the system, there is a subdirectory in /proc with the process ID (PID) as the name. These subdirectories contain information about the process, including its status, memory usage, and other details.
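One mechanism for this kind of restriction is the hidepid mount option on /proc, which hides other users' /proc/[PID] directories (this is a standard kernel feature; how it interacts with tools like systemd-logind varies by distribution, so test before deploying):

```
# /etc/fstab -- remount /proc so unprivileged users only see their own processes.
# hidepid=2 makes other users' /proc/[PID] directories invisible entirely.
proc  /proc  proc  defaults,hidepid=2  0  0
```

You can try it without rebooting via `mount -o remount,hidepid=2 /proc`; afterwards, `ps` run by an ordinary user lists only that user's own processes.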
Insecure Protocols
This probably goes without saying, but I will say it anyway: you should NOT be using insecure protocols. Not even HTTP, especially if it's exposed to the Internet. There are too many free security solutions for this to be acceptable anymore. Several network protocols are considered insecure and should generally be avoided:
- Telnet: Telnet is a protocol for remotely accessing and managing devices over a network. It is considered insecure because it transmits data in plaintext, making it vulnerable to interception and tampering. Use SSH instead.
- FTP (File Transfer Protocol): FTP is a protocol for transferring files over a network. It is considered insecure because it transmits both credentials and data in plaintext, with no encryption by default. Use SFTP or FTPS instead.
- HTTP (Hypertext Transfer Protocol): HTTP is a protocol for transferring data over the web. It is considered insecure because it transmits data in plaintext and does not provide encryption. Use HTTPS instead.
- RDP (Remote Desktop Protocol): RDP is a protocol for remotely accessing and controlling a computer over a network. While modern RDP does encrypt traffic, it has a long history of vulnerabilities that have been exploited by attackers, and exposed RDP endpoints are a constant target for brute-force attacks. It should never be exposed directly to the Internet; put it behind a VPN instead.
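For HTTP specifically, the usual fix is to serve everything over TLS and redirect any plain-HTTP request. A sketch of this in nginx follows; the domain name and certificate paths are placeholders:

```
# Illustrative nginx config: refuse plain HTTP by redirecting to HTTPS.
# "example.com" and the certificate paths are placeholders.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;
    # ... site configuration ...
}
```

Free certificates from Let's Encrypt (via certbot or similar tooling) remove the last excuse for leaving a public site on plain HTTP.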