
A Secure Internal Network with the Freedom to Browse

Internet in the workplace has become a serious security concern. Leakage of personal data from a number of locations in India makes the security issues an even bigger worry. We need to convince our clients that policy violations will not be possible. Yet, I personally can't imagine how we developed code before the era of the Internet, or of 'googling' to be more specific. We may not like it, but we have to accept that a completely open Internet policy is not always wise or even practical. I may want the code to be free, but not my personal information.

While threats from the outside have always been recognized and are relatively easy to tackle with firewalls, internal threats are a much harder nut to crack. Far too often, the internal threats are handled in ways that are constraining and seem unreasonable. Consider the following simple options:

  1. We must browse using a proxy, and a user should not be able to change the proxy settings. This policy can be enforced in a Windows environment using Active Directory policies. A very unfortunate side-effect is that Firefox is 'banned', because Firefox allows a user to change the proxy settings and ignore the Windows settings.

  2. A separate network is maintained for browsing. Concurrent research on the web and coding is not feasible unless you have two machines side by side.

I am sure that other practices are followed as well, some of which may be even worse. It would be nice if one could have the security of separate networks and yet use only one system.

Separating the Activities

In order to examine this possibility, we can experiment with two machines and a broadband router. The gateway system has two Ethernet cards, one connected to the router and the other to the internal network, which consists of at least one workstation. The gateway machine is expected to run Linux. We will not impose any OS constraints on the workstations, except moral ones.

We do not wish our internal network to be exposed to viruses. We do not wish any data to be transferable out of the internal network. We want the setup to behave like two unconnected computers.

The concept of thin clients comes to our aid. We do our development on the local machine or the internal network, and we do the web-related work on the gateway machine.

This is easily achieved with the beauty of X, using the following command on a workstation:

$ X -query <gateway> :1

We can also open a window on our existing desktop and use Xnest or Xephyr instead.
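For example, a nested session in a window on the current desktop might be started as follows ('gateway' and the window geometry are placeholders; substitute your own gateway hostname and a size that suits your screen):

```shell
# Nested X session in a window on the existing desktop, logging in
# to the gateway via XDMCP ('gateway' is a placeholder hostname):
Xephyr -query gateway -screen 1024x768 :1

# Xnest works similarly, with a slightly different size option:
Xnest -query gateway -geometry 1024x768 :1
```

Display :1 is used so that the nested server does not clash with the workstation's own display :0.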

On Windows, we can install Cygwin and its X server.

The inability to copy and paste from one environment to the other seems irritating at first, but it can also be regarded as a security benefit.

Securing the Network

The next step is easy. We want the internal network machines to be able to use X applications on the server but not be able to do anything else. This is what a firewall does best.

A relatively painless way of defining complex iptables rules is to use a GUI tool, e.g. fwbuilder.

The following steps can be used to create the desired firewall in fwbuilder (v2.1.10):

  1. Create a new object file

  2. Create a new firewall with 3 interfaces – loopback, internal, external

  3. Create the internal network

  4. Create new TCP and UDP services for xdmcp (port 177). The X11 service is predefined and available as part of the Standard settings.

  5. Define a new policy:

    1. If the source is the internal network, the destination is the internal interface on the firewall, and the service is xdmcp, then accept.

    2. The X server on the 'client' workstation uses the X11 port: 6000 plus the display number, so display :1 listens on port 6001. Recall that the applications on the host are the clients of the X server. So, if the source is the internal interface on the firewall, the destination is the internal network, and the service is X11, then accept the packets.

    3. Anything else, deny.

  6. Loopback policy is normally to accept everything.

  7. The policy for the external interface can be similarly defined based on the requirements.

We can install this firewall on the gateway system. Each workstation in the internal network can start an X session on the gateway server but should not be able to do anything else.
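For reference, the policy for the internal interface corresponds roughly to iptables rules like the following sketch. The interface name and subnet are assumptions; substitute your own values, and note that fwbuilder's generated script will differ in detail:

```shell
# Assumed names; substitute your own internal interface and subnet.
IN_IF=eth1
LAN=192.168.10.0/24

# Default-deny policies
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

# Loopback: accept everything
iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Let replies to already-accepted connections flow in both directions
iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Rule 1: XDMCP queries from the internal network to the gateway (UDP 177)
iptables -A INPUT -i "$IN_IF" -s "$LAN" -p udp --dport 177 -j ACCEPT

# Rule 2: the gateway's X clients connecting to the workstations'
# X servers; display :1 listens on TCP 6001 (6000 + display number)
iptables -A OUTPUT -o "$IN_IF" -d "$LAN" -p tcp --dport 6001 -j ACCEPT

# Rule 3: anything else is dropped by the default policies above
```

The stateful rules are what allow the XDMCP replies and X11 return traffic through without opening any extra ports.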

Getting more out of the Server

It is suggested that the default desktop on the gateway server be a lightweight one, e.g. IceWM. This can reduce the resource requirements on the server substantially. Put another way, given a server, many more workstations can connect to it and still get a very good response. I recall connecting about 50 thin clients to a 4 GB server, but obviously it will depend a lot on what each workstation is doing.

The role of the server need not be limited to browsing. External email is obviously a need. The gateway can also act as the mail server. Mail access can be provided over the browser only, using, e.g., SquirrelMail.

An organization can allow a wide range of options on the server without worrying about contamination of the local network or leakage of information to the web from the local network.

Relaxing some Conditions

We may decide that some data from the web needs to come to the local network. We will assume that this permission is to be given to project leads only, or more precisely, to the machines allocated to the project leads. We can export the desired directory on the server using NFS or Samba, making sure that read-only access is provided.
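A read-only NFS export on the gateway might look like this sketch; the directory, subnet and mount point are placeholders for your own setup:

```shell
# /etc/exports on the gateway -- export a download area read-only
# to the internal network (path and subnet are placeholders):
#
#   /srv/downloads  192.168.10.0/24(ro,root_squash,sync)

# After editing /etc/exports, re-export the file systems:
exportfs -ra

# On a project lead's machine, mount the share:
mount -t nfs gateway:/srv/downloads /mnt/downloads
```

The `ro` option enforces read-only access on the server side, so a misconfigured client cannot write to the share.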

This needs a change in the firewall to allow NFS or SMB access from the machines of the project leads. The additional steps are:

  1. Define a host for each project lead

  2. Create a group of project leads which includes the hosts of the various project leads.

  3. Create a new rule which accepts packets with the source being the group of project leads, the destination being the gateway, and the services being SMB or NFS.

    For NFS, the nfs, sunrpc and mountd ports are needed by an NFS client. The port for mountd is dynamically allocated by the portmapper. It may be necessary to assign it a fixed port so that it can be specified in the firewall. The command rpcinfo will give information about the ports being used by the nfs/rpc-related servers.

    For SMB, smb/microsoft-ds (port 445) and the NetBIOS-related ports (137-139) are needed.
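The NFS ports can be inspected, and mountd pinned down, along these lines (the port number 4002 is an arbitrary choice, not a standard):

```shell
# List the ports currently used by the RPC services
# (portmapper, nfs, mountd, ...):
rpcinfo -p

# Run mountd on a fixed port so that the firewall rule can refer
# to it; 4002 is an arbitrary choice, any free port will do:
rpc.mountd -p 4002
```

With mountd on a known port, the firewall service for NFS can name sunrpc (111), nfs (2049) and the chosen mountd port explicitly.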

We can relax other conditions if required. For example, we may allow the project leads write access as well, as finally the code has to be delivered. This is not to say that the higher the position, the more trustworthy the person. Rather, the goal is an absence of anonymity and ease of tracking a violation. These, in turn, make policy violation an unacceptably high-risk proposition and help keep a person on the straight path. If we know that a radar is checking car speeds on a road, we tend to drive within legal limits.

Concluding Remarks

Security does not have to be debilitating. A good policy is one which does not get in the way of a person trying to do his or her work. There will, then, be no temptation to find a way to bypass unreasonable constraints. Good policies should also not reduce efficiency. We can use Linux or any other Unix-like environment to find elegant solutions to such problems.