Introduction
Tunnelize is a self-hosted tunneling server and client. It allows users to create secure tunnels between two endpoints, ensuring that data transmitted over the network is encrypted and protected. It can forward HTTP, TCP, and UDP traffic, and supports encrypted connections, monitoring via CLI, and more.
How this works
When you set up a Tunnelize server, the server creates one or more endpoints based on its configuration. These endpoints serve as entry points (HTTP, TCP, UDP) for anyone you are trying to communicate with (the client). Clients connect to the endpoint, and the Tunnelize server finds an available tunnel for that endpoint (for example, on TCP this is based on the port, for HTTP on the Host header, and so on).
If there is an available tunnel, Tunnelize will send a request to create a link session between that tunnel and the client. Once the link is established, data is forwarded between both parties until one party closes the connection.
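As a rough illustration (not Tunnelize's actual code; names below are hypothetical), the routing step can be thought of as a lookup from a request's identifying key to a registered tunnel:

```python
# Illustrative sketch of endpoint-to-tunnel matching -- not the real implementation.
# TCP/UDP endpoints match on the destination port; HTTP matches on the Host header.

tunnels_by_host = {"tunnel-myname.localhost": "tunnel-42"}  # HTTP: Host -> tunnel id
tunnels_by_port = {4000: "tunnel-42"}                       # TCP/UDP: port -> tunnel id

def route_http(host_header):
    # HTTP endpoint: pick the tunnel registered for this Host header
    return tunnels_by_host.get(host_header)

def route_tcp(port):
    # TCP/UDP endpoint: pick the tunnel registered for this port
    return tunnels_by_port.get(port)
```

If no tunnel is found for the key, there is nothing to link the client to; otherwise the link session described above is created.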
Features
- Traffic tunneling: Tunnel local traffic to HTTP/HTTPS, TCP and UDP endpoints
- Secure Connection: Securely connect to the tunnel server and allow secure connections on endpoints
- Client Provisioning: Initialize a client by downloading settings from the tunnel server.
- Monitoring: View and manage active tunnels, clients and links via CLI or JSON API
Quickstart
Download Tunnelize for your system from the releases page.
Then initialize the configuration by running `tunnelize init`. This will create a `tunnelize.json` with a default tunnel and server configuration.
Run a local HTTP server on port 8080. This will be the server we forward traffic from.
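If you do not have a server handy, Python's built-in web server is a quick option (assuming Python 3 is installed):

```shell
# Serve the current directory over HTTP on port 8080, in the background
python3 -m http.server 8080 &
```

Any local HTTP server works here; this is just a quick one to spin up.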
Run `tunnelize server`. This will run the main server at port 3456 (by default), creating listeners for all default endpoints (the default HTTP endpoint at port 3457).
Run `tunnelize tunnel`. This will connect to the server at port 3456 and tunnel traffic from your local server. In the response you will see the URL assigned to you to tunnel from; assuming the default config, it will be something like:
[Forward|http] localhost:8080 -> http://tunnel-myname.localhost:3457
Open a browser and connect to http://tunnel-myname.localhost:3457 (this should work as expected in modern browsers like Chrome and Firefox) and you will be able to see the results from your local server at port 8080.
See other topics for setup information:
Setting up a server
To set up a server, first initialize the configuration by running `tunnelize init server`.
This will create an initial default configuration in `tunnelize.json` for a server; see Configuration for information about specific attributes.
Run the server with `tunnelize` or `tunnelize server`, after which the Tunnelize server is ready to accept connections.
For information on how to set up the server as a service so that it keeps running even after OS restarts, see here.
Configuring the server
The following is a typical default configuration for a server:
{
  "server": {
    "endpoints": { /* ...endpoints... */ }
  }
}
As can be seen, only the `endpoints` parameter is required; it defines which endpoints the server will tunnel for you.
Below are all available parameters:
Field | Description | Default Value |
---|---|---|
server_port | Port on which the server listens for tunnel connections. | 3456 |
server_address | Address to which the server will bind to. | 0.0.0.0 |
max_tunnel_input_wait | Maximum amount of time (in seconds) to wait from tunnel connection to first message from tunnel. | 30 |
tunnel_key | Key which tunnel must have in order to be allowed to communicate. | No key required |
monitor_key | Key which tunnelize tunnel must have in order to execute monitor commands on the server. | No key required |
endpoints | Configuration for server endpoints. See endpoints for more information. | No default |
encryption | TLS encryption settings. See encryption | No encryption |
max_tunnels | Maximum number of tunnels allowed on the server. | 100 |
max_clients | Maximum number of clients allowed on the server. | 100 |
max_proxies_per_tunnel | Maximum number of proxies per tunnel allowed. | 10 |
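Putting several of these parameters together, a fuller configuration might look like the following (illustrative values; only `endpoints` is actually required):

```json
{
  "server": {
    "server_port": 3456,
    "server_address": "0.0.0.0",
    "max_tunnel_input_wait": 30,
    "tunnel_key": "changethiskey",
    "max_tunnels": 100,
    "max_clients": 100,
    "max_proxies_per_tunnel": 10,
    "endpoints": { /* ...endpoints... */ }
  }
}
```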
Configuring Encryption
Encryption can be one of two types:
No encryption required
{
  "type": "none"
}
In this case, any Tunnelize client can connect and pass data over an unencrypted connection. This means that all data passed between the tunnel and the server is visible to third parties.
TLS encryption
{
  "type": "tls",
  "cert_path": "/path/to/certificate/file",
  "key_path": "/path/to/key/file"
}
Standard TLS encryption will be used. Keep in mind that in this case the tunnel must also use encryption, either with a certificate authority set (if using a self-signed certificate) or with `native-tls` if you are using a known certificate authority like Let's Encrypt.
See setting up certificates for information on how to use certificate files.
Configuring Endpoints
Endpoints are configured as follows:
{
  "server": {
    // ... other fields
    "endpoints": {
      "endpoint-name-1": {
        "type": "http",
        // ...configuration for HTTP endpoint
      },
      "endpoint-name-2": {
        "type": "tcp",
        // ...configuration for TCP endpoint
      },
      // ... other endpoints
    }
  }
}
The keys `endpoint-name-1` and `endpoint-name-2` are endpoint names; they are arbitrary, so you can set them to anything you wish as long as they are lowercase and alphanumeric. You can create any number of endpoints through which clients can connect to your local servers. Each endpoint name has to be unique, since this is the name you will need to use in the tunnel configuration for the proxies.
There are multiple types of endpoints:
Setting up a service
The Tunnelize server keeps running only as long as you do not stop it or restart the machine. To keep it running at all times, it is best to set up a service daemon to run it in the background. Below are common ways of setting this up (assuming you are running Linux).
Using Systemd
Systemd is a system and service manager for Linux operating systems. It is responsible for initializing the system, managing system processes, and handling system services. Systemd provides a standardized way to manage services, including starting, stopping, enabling, and disabling them. It uses unit files to define services and their configurations, allowing for consistent and efficient service management. Systemd also handles dependencies between services, ensuring that services start in the correct order and that required services are available when needed.
Create a new file named `tunnelize.service` in the `/etc/systemd/system/` directory (check your Linux distribution for the correct path) with the following content:
[Unit]
Description=Tunnelize Service
After=network.target
[Service]
Type=simple
ExecStart=/path/to/tunnelize
WorkingDirectory=/path/to/your/config
Restart=on-failure
User=nobody
Group=nogroup
[Install]
WantedBy=multi-user.target
Make sure to replace `/path/to/tunnelize` with the actual path to the Tunnelize executable and `/path/to/your/config` with the directory where `tunnelize.json` is located if you are running `tunnelize server --config=path`. Set the user to the desired user which will run the process (see the systemd documentation for details).
After setting everything up, reload the systemd daemon to apply the changes:
sudo systemctl daemon-reload
After this, your service is discoverable but not yet enabled. To enable it, run:
sudo systemctl enable tunnelize
Then start your service:
sudo systemctl start tunnelize
To see the logs and status of the Tunnelize service, run:
sudo systemctl status tunnelize
Using Supervisor
Supervisor is a process control system for UNIX-like operating systems. It allows you to monitor and control multiple processes, ensuring they stay running and automatically restarting them if they fail. This is particularly useful for managing long-running services and applications, providing a simple way to keep them operational without manual intervention.
Supervisor does not usually come with your Linux distribution, so it must be installed first. You can do so by running:
sudo apt-get update
sudo apt-get install supervisor
Check your Linux distribution for the proper way to install Supervisor if you are not using `apt-get` (used by Linux distributions like Ubuntu or Debian).
Create a Supervisor Configuration File for Tunnelize:
Create a new configuration file for Tunnelize in the Supervisor configuration directory, typically located at `/etc/supervisor/conf.d/`. Name the file `tunnelize.conf` and add the following content:
[program:tunnelize]
command=/usr/local/bin/tunnelize
directory=/path/to/your/config
autostart=true
autorestart=true
stderr_logfile=/var/log/tunnelize.err.log
stdout_logfile=/var/log/tunnelize.out.log
user=nobody
Make sure to replace `/usr/local/bin/tunnelize` with the actual path to the Tunnelize executable and `/path/to/your/config` with the directory where `tunnelize.json` is located. Set the user to the desired user which will run the process.
After creating the configuration file, update Supervisor to recognize the new service:
sudo supervisorctl reread
sudo supervisorctl update
Start the Tunnelize service using Supervisor:
sudo supervisorctl start tunnelize
You can check the status of the Tunnelize service with:
sudo supervisorctl status tunnelize
Setting up certificates
Certificates are needed in order to use encrypted connections in Tunnelize. Certificates can be self-signed or issued by a certificate authority like Let's Encrypt.
Self-signed certificates
Self-signed certificates are SSL/TLS certificates that are signed by the same entity whose identity they certify. Unlike certificates issued by a trusted certificate authority (CA), self-signed certificates are not automatically trusted by browsers and operating systems. They are typically used for testing, development, or internal purposes where trust can be manually established.
To generate self-signed certificates we will use the `openssl` command.
Generating Certificate Authority (CA)
The first step is to generate a certificate authority:
openssl genrsa -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt \
-subj "/C=US/ST=State/L=City/O=YourCA/CN=localhost"
This will generate a `ca.crt` file which the tunnel will use to validate the server certificate.
Certificates have an expiry time; in this example, expiration is set to 1 year.
Make sure you replace `/C=US/ST=State/L=City/O=YourCA/CN=localhost` with the proper values for your certificate; in this case, a dummy localhost certificate will be created.
Here's a breakdown of those OpenSSL Distinguished Name (DN) parameters used in certificate generation:
Parameter | Name | Description | Example |
---|---|---|---|
/C | Country | Two-letter country code | US for United States |
/ST | State | State or province name | California |
/L | Locality | City or locality name | San Francisco |
/O | Organization | Organization or company name | Example Corp |
/CN | Common Name | Fully qualified domain name (FQDN) | localhost or example.com |
You can also import this certificate authority file into your operating system or a browser to make it a trusted certificate.
Generating a server certificate
The next step is to generate a server certificate. Before we can do that, we need to set up a `server.conf` configuration file for the certificate.
Here is an example file for the configuration:
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
C = US
ST = State
L = City
O = Organization
CN = localhost
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
IP.1 = 127.0.0.1
Here is a breakdown of the configuration file:
Section | Purpose | Description |
---|---|---|
[req] | Request Settings | Main configuration section for certificate requests |
[req_distinguished_name] | DN Information | Contains the certificate subject information |
[v3_req] | X509v3 Extensions | Defines certificate capabilities and constraints |
[alt_names] | Subject Alternative Names | Defines additional hostnames/IPs for the certificate |
Let's look at each section in detail:
[req] Section
distinguished_name = req_distinguished_name # Points to DN section
req_extensions = v3_req # Points to extensions section
prompt = no # Don't prompt for values interactively
[req_distinguished_name] Section
C = US # Country
ST = State # State/Province
L = City # Locality/City
O = Organization # Organization
CN = localhost # Common Name
[v3_req] Section
basicConstraints = CA:FALSE # Not a Certificate Authority
keyUsage = nonRepudiation, digitalSignature, keyEncipherment # Allowed key uses
subjectAltName = @alt_names # Points to alt names section
[alt_names] Section
DNS.1 = localhost # DNS name the cert is valid for
IP.1 = 127.0.0.1 # IP address the cert is valid for
Make sure that you set all valid DNS names and IP addresses for where you want to use your server.
Signing server certificate
The server certificate cannot be used with your CA yet because it is not signed. To sign it, run the following command:
openssl x509 -req -days 825 -in server.csr \
-CA ca.crt -CAkey ca.key -CAcreateserial \
-out server.crt \
-extfile server.conf -extensions v3_req
Note that the same `server.conf` is used for signing.
After signing, you will be able to use `server.crt` and `server.key` in your Tunnelize server by setting them in the encryption part of the configuration:
{
  "server": {
    // ... other fields
    "encryption": {
      "type": "tls",
      "cert_path": "/path/to/server.crt",
      "key_path": "/path/to/server.key"
    }
  }
}
Your tunnel encryption will need to point to the `ca.crt` file:
{
  "tunnel": {
    // ... other fields
    "encryption": {
      "type": "tls",
      "cert": "/path/to/ca.crt"
    }
  }
}
Setting up certificates using Let's Encrypt
Before starting, make sure that you have access to the DNS zone for your domain and can change it.
First, install Certbot:
# For Ubuntu/Debian
sudo apt update
sudo apt install certbot
# For CentOS/RHEL
sudo dnf install epel-release
sudo dnf install certbot
Generate the wildcard certificate:
sudo certbot certonly --manual --preferred-challenges=dns --server https://acme-v02.api.letsencrypt.org/directory -d "*.your-hostname.com"
Replace `your-hostname.com` with your domain.
Wait until you are prompted by Certbot:
- You'll receive a TXT record value
- Create a DNS TXT record in your domain's DNS zone:
  - Name/Host: `_acme-challenge`
  - Type: TXT
  - Value: the string provided by Certbot
  - TTL: use the lowest possible value (e.g., 60 seconds)
Once verified, press Enter in the Certbot prompt to complete the process.
Your certificates will be stored at (based on your domain name):
- Private key: `/etc/letsencrypt/live/your-hostname.com/privkey.pem`
- Certificate: `/etc/letsencrypt/live/your-hostname.com/fullchain.pem`
Important
A Let's Encrypt certificate usually lasts for 90 days, after which you will need to renew it by running the same generation command above and following the process. If your DNS provider has an API, you could automate this process with Certbot DNS plugins.
The server will need to be configured as:
{
  "server": {
    // ... other fields
    "encryption": {
      "type": "tls",
      "cert_path": "/etc/letsencrypt/live/your-hostname.com/fullchain.pem",
      "key_path": "/etc/letsencrypt/live/your-hostname.com/privkey.pem"
    }
  }
}
The tunnel will need to be configured as:
{
  "tunnel": {
    // ... other fields
    "encryption": {
      "type": "native-tls"
    }
  }
}
In this case, `native-tls` is used so that your OS certificate store is used, because the Let's Encrypt certificate authority (CA) is normally trusted by your operating system.
HTTP endpoint
The HTTP endpoint is a listening point where the Tunnelize server accepts incoming HTTP requests. It allows clients to tunnel local HTTP traffic through the Tunnelize server.
Tunnels configured to forward HTTP traffic first connect to the server, where they are assigned a domain which a client can visit in a browser to access the local HTTP server.
When a client first connects to the HTTP endpoint, the server uses the `Host` header to decide which tunnel to connect to. After a tunnel is found, a link is established between the client and the tunnel, and data is forwarded until either side closes the connection.
Configuring endpoint
Default HTTP endpoint configuration looks like this:
{
  "server": {
    // ...other fields
    "endpoints": {
      "http-endpoint": {
        "type": "http",
        "port": 3457,
        "hostname_template": "tunnel-{name}.localhost"
      }
    }
  }
}
Fields:
Field | Description | Default Value |
---|---|---|
type | The type of the connection. Always http for http endpoint. | No default |
port | The port number for the connection | No default |
encryption | The type of encryption used to enable HTTPS. See configuring encryption. | No encryption |
address | The address for the connection to bind to. | 0.0.0.0 |
max_client_input_wait_secs | Maximum number of seconds to wait between the start of the TCP connection and the first request being sent. | 300 |
hostname_template | Template for the hostname to use when generating a hostname. See configuring templates below. | No default |
full_url_template | Template for the full URL to use when returning it to the tunnel. See configuring templates below. | Automatic generation if not set. |
allow_custom_hostnames | Whether custom hostnames are allowed. See configuring templates below. | true |
require_authorization | Whether authorization is required. See configuring authorization below. | No authorization required |
Configuring templates
For HTTP endpoints you can set templates to define how a URL will be generated for a tunnel. There are two templates you can set:
{
  // ...other fields
  "hostname_template": "tunnel-{name}.localhost",
  "full_url_template": "http://{hostname}:{port}"
}
Setting `hostname_template` is required; that template lets the HTTP endpoint generate a random name for the tunnel in the `{name}` part, or use a custom name if `allow_custom_hostnames` is set.
When `allow_custom_hostnames` is enabled, the `desired_name` defined in the tunnel proxy configuration for the HTTP tunnel will be used unless it is already taken. If it is already taken, a similar name will be autogenerated.
Setting `full_url_template` is useful if you are running the Tunnelize server behind something like nginx or Apache, where the HTTP endpoint port seen by the client is not the one the endpoint itself listens on. This lets you set the template you wish the server to return to the tunnel after registration.
The parameter `{hostname}` will be replaced by the name generated by `hostname_template`, and `{port}` by the HTTP endpoint's port (you can omit the port or set your own if needed).
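For example, if nginx terminates connections on the standard HTTPS port in front of the endpoint, the templates (with a hypothetical hostname) could look like:

```json
{
  "hostname_template": "tunnel-{name}.your-hostname.com",
  "full_url_template": "https://{hostname}"
}
```

Here the port is omitted from the URL because clients reach nginx on port 443 rather than the endpoint's own port.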
Configuring authorization
If you do not wish everyone to be able to see your local tunnel while it is running, you can set up authorization, where the user needs to enter a username and password in the browser to access your tunnel.
Set the following configuration:
{
  "require_authorization": {
    "realm": "exampleRealm",
    "username": "user123",
    "password": "pass123"
  }
}
Here is the breakdown of the parameters:
Parameter | Description | Example Value |
---|---|---|
realm | A string that specifies the protection space. It is used to define the scope of protection for the browser. This field is not required. | "exampleRealm" |
username | The username required for authentication. | "user123" |
password | The password required for authentication. | "pass123" |
Working with existing HTTP server
If you are using a http server like Apache or nginx it is possible to make tunnelize work with it. See links below for your http server:
Working with nginx
If you are using nginx on your server, it is possible to set up a Tunnelize server to work together with it. In this case, the Tunnelize server will use an HTTP endpoint, which will be proxied through the nginx server to the user.
Configuration without SSL
Important
Make sure your DNS zone supports wildcard domains.
Configure your HTTP endpoint similar to this:
{
  "type": "http",
  "port": 3457,
  "encryption": {
    "type": "none"
  },
  "max_client_input_wait_secs": 10,
  "hostname_template": "tunnel-{name}.your-hostname.com",
  "allow_custom_hostnames": true
}
Then create a virtual host in nginx like this:
server {
    listen 80;
    # Prefixed subdomain pattern so that any tunnel subdomain is matched
    server_name ~^tunnel-(?<subdomain>\w+)\.your-hostname\.com$;

    # Increase the client request timeouts
    client_body_timeout 60s;
    client_header_timeout 60s;

    # Increase proxy timeouts for connecting to the backend
    proxy_connect_timeout 60s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;

    # Keep connections alive for a longer time
    keepalive_timeout 65s;

    location / {
        proxy_pass http://127.0.0.1:3457; # Point this at the Tunnelize HTTP endpoint port

        # This is required for Tunnelize to figure out where to route to
        proxy_set_header Host $host;

        # Pass WebSocket headers only when the connection is upgrading
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        # Other proxy settings (optional)
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_buffering off;
        proxy_max_temp_file_size 0;
    }
}

# This mapping is required for WebSocket support
map $http_upgrade $connection_upgrade {
    default "close";
    websocket "upgrade";
}
Configuration with SSL
Important
Make sure your DNS zone supports wildcard domains. Also make sure that you have a wildcard certificate set up.
Use the same configuration for nginx as above, but with the following changes:
server {
    # ...other settings

    listen 443 ssl; # change listen to this

    # Add SSL certificates
    ssl_certificate /etc/letsencrypt/live/example.com-0001/fullchain.pem; # make sure this path matches your Certbot certificate
    ssl_certificate_key /etc/letsencrypt/live/example.com-0001/privkey.pem; # make sure this path matches your Certbot certificate
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # ...other settings
}
Working with Apache
If you are using Apache on your server, it is possible to set up a Tunnelize server to work together with it. In this case, the Tunnelize server will use an HTTP endpoint, which will be proxied through the Apache server to the user.
Configuration without SSL
Important
Make sure your DNS zone supports wildcard domains.
Configure your HTTP endpoint similar to this:
{
  "type": "http",
  "port": 3457,
  "encryption": {
    "type": "none"
  },
  "max_client_input_wait_secs": 10,
  "hostname_template": "tunnel-{name}.your-hostname.com",
  "allow_custom_hostnames": true
}
Then create a virtual host configuration in Apache like this:
# Enable required modules
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule headers_module modules/mod_headers.so

# Virtual host configuration
<VirtualHost *:80>
    # Use a wildcard ServerAlias to match tunnel subdomains
    ServerName tunnel-prefix.your-hostname.com
    ServerAlias tunnel-*.your-hostname.com

    # Set longer timeouts
    TimeOut 60
    ProxyTimeout 60

    # Enable WebSocket proxying
    RewriteEngine On
    RewriteCond %{HTTP:Upgrade} websocket [NC]
    RewriteCond %{HTTP:Connection} upgrade [NC]
    RewriteRule ^/?(.*) "ws://localhost:3457/$1" [P,L]

    # Proxy configuration
    ProxyPass / http://localhost:3457/
    ProxyPassReverse / http://localhost:3457/

    # Pass required headers (RequestHeader requires mod_headers, loaded above)
    ProxyPreserveHost On
    RequestHeader set X-Forwarded-Proto "http"
    RequestHeader set X-Real-IP %{REMOTE_ADDR}s
    RequestHeader set X-Forwarded-For %{REMOTE_ADDR}s

    # Disable response buffering
    SetEnv force-proxy-request-1.0 1
    SetEnv proxy-nokeepalive 1
</VirtualHost>
Configuration with SSL
Important
Make sure your DNS zone supports wildcard domains. Also make sure that you have a wildcard certificate setup.
Use the same configuration as above, but modify the VirtualHost configuration to include SSL:
<VirtualHost *:443>
    # Same ServerName and ServerAlias as above
    ServerName tunnel-prefix.your-hostname.com
    ServerAlias tunnel-*.your-hostname.com

    # SSL configuration
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/example.com-0001/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com-0001/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf

    # All other configuration remains the same as the non-SSL version
    TimeOut 60
    ProxyTimeout 60

    RewriteEngine On
    RewriteCond %{HTTP:Upgrade} websocket [NC]
    RewriteCond %{HTTP:Connection} upgrade [NC]
    # The backend connection is still unencrypted, so ws:// (not wss://) is used here
    RewriteRule ^/?(.*) "ws://localhost:3457/$1" [P,L]

    ProxyPass / http://localhost:3457/
    ProxyPassReverse / http://localhost:3457/

    ProxyPreserveHost On
    RequestHeader set X-Forwarded-Proto "https"
    RequestHeader set X-Real-IP %{REMOTE_ADDR}s
    RequestHeader set X-Forwarded-For %{REMOTE_ADDR}s

    SetEnv force-proxy-request-1.0 1
    SetEnv proxy-nokeepalive 1
</VirtualHost>
Note the key differences from the nginx configuration:
- Apache requires explicit module loading for proxy and WebSocket support
- WebSocket proxying is handled through mod_rewrite rules rather than headers
- The header setting syntax is different but achieves the same result
- SSL configuration uses Apache's SSLEngine directives instead of nginx's ssl_ directives
Make sure all required Apache modules are enabled:
a2enmod proxy
a2enmod proxy_http
a2enmod proxy_wstunnel
a2enmod rewrite
a2enmod headers
a2enmod ssl # If using SSL
After making these changes, restart Apache to apply the configuration:
sudo systemctl restart apache2
TCP endpoint
The TCP endpoint is a listener for TCP traffic. When this endpoint is started, it listens for client connections on a specified port range. When a client connects to a specific port, the server looks for a connected tunnel on that port; if there is such a tunnel, it creates a link between them and routes data.
Configuration is set up as follows:
{
  "server": {
    // ...other fields
    "endpoints": {
      "tcp": {
        "type": "tcp",
        "address": null,
        "allow_desired_port": true,
        "reserve_ports_from": 4000,
        "reserve_ports_to": 4050,
        "encryption": {
          "type": "none"
        },
        "full_hostname_template": "localhost:{port}"
      }
    }
  }
}
Fields:
Key | Description | Default Value |
---|---|---|
type | The type of the endpoint; in this case, always tcp. | No default |
address | The address to bind to. | 0.0.0.0 |
allow_desired_port | Allows the use of a desired port if available. If not available, first available port will be chosen. | true |
reserve_ports_from | The starting port of the reserved range for this endpoint. | No default |
reserve_ports_to | The ending port of the reserved range for this endpoint. | No default |
encryption | The type of TLS encryption used. See configuring encryption. | No encryption |
full_hostname_template | Template for the full hostname with port. See configuring templates below. | No default |
Configuring templates
For TCP endpoints you can set a template to define how the hostname will be generated for a tunnel:
{
  "server": {
    // ...other fields
    "endpoints": {
      "tcp": {
        // ...other fields
        "full_hostname_template": "localhost:{port}"
      }
    }
  }
}
The template you set here will be returned by the server to the tunnel proxying the connection, to tell the user where their local server can be reached. The placeholder `{port}` will be replaced by the port assigned to the tunnel.
UDP endpoint
The UDP endpoint is a listener for UDP traffic. When this endpoint is started, it listens for client connections on a specified port range. When a client connects to a specific port, the server looks for a connected tunnel on that port; if there is such a tunnel, it creates a link between them and routes data.
Configuration is set up as follows:
{
  "server": {
    // ...other fields
    "endpoints": {
      "udp": {
        "type": "udp",
        "address": null,
        "allow_desired_port": true,
        "reserve_ports_from": 4000,
        "reserve_ports_to": 4050,
        "full_hostname_template": "localhost:{port}"
      }
    }
  }
}
Fields:
Key | Description | Default Value |
---|---|---|
type | The type of the endpoint; in this case, always udp. | No default |
address | The address to bind to. | 0.0.0.0 |
inactivity_timeout | Amount of time in seconds for how long to wait (from last transmission) until UDP client is considered inactive. | 300 |
allow_desired_port | Allows the use of a desired port if available. If not available, first available port will be chosen. | true |
reserve_ports_from | The starting port of the reserved range for this endpoint. | No default |
reserve_ports_to | The ending port of the reserved range for this endpoint. | No default |
full_hostname_template | Template for the full hostname with port. See configuring templates below. | No default |
Configuring templates
For UDP endpoints you can set a template to define how the hostname will be generated for a tunnel:
{
  "server": {
    // ...other fields
    "endpoints": {
      "udp": {
        // ...other fields
        "full_hostname_template": "localhost:{port}"
      }
    }
  }
}
The template you set here will be returned by the server to the tunnel proxying the connection, to tell the user where their local server can be reached. The placeholder `{port}` will be replaced by the port assigned to the tunnel.
Monitoring endpoint
The monitoring endpoint is an API endpoint which allows the user to manage the Tunnelize server. It exposes a JSON API for managing tunnels, clients, and links, and for monitoring the system.
To set up the monitoring API, configure an endpoint like this:
{
  "server": {
    // ...other fields
    "endpoints": {
      "monitoring-endpoint": {
        "type": "monitoring",
        "port": 3000,
        "encryption": {
          "type": "none"
        },
        "address": null,
        "authentication": {
          "type": "basic",
          "username": "admin",
          "password": "changethispassword"
        },
        "allow_cors_origins": {
          "type": "any"
        }
      }
    }
  }
}
Fields:
Key | Description | Default Value |
---|---|---|
type | Type of service. Always monitoring for monitoring endpoint. | No default |
port | Port number | No default |
encryption | Encryption for HTTPS access. See configuring encryption. | No encryption |
address | Service address. | 0.0.0.0 |
authentication | Type of authentication. See configuring authentication below. | No default |
allow_cors_origins | CORS origins allowed. See configuring CORS below. | any |
Configuring authentication
Authentication allows you to protect the monitoring endpoint from unauthorized access. It is important to set this on production hosting if you are using the monitoring endpoint, because otherwise an unauthorized user can manage tunnel, client, and link access.
There are two types of authorization you can set: basic and bearer.
Keep in mind that monitoring has brute-force protection: a user is locked out for 5 minutes after 5 failed attempts.
Setting up basic authorization
Configuration will look like this:
{
  "server": {
    // ...other fields
    "endpoints": {
      "monitoring-endpoint": {
        // ...other fields
        "authentication": {
          "type": "basic",
          "username": "admin",
          "password": "changethispassword"
        }
      }
    }
  }
}
This sets up a basic authorization method, where the browser will ask you to enter this username and password to access the endpoint.
Setting up bearer authorization
Bearer authorization is a more traditional token authorization as used in API requests. Your API client sends the token in the `Authorization: Bearer <token>` header, and if the token value is correct, Tunnelize grants access.
Configuration looks like this:
{
  "server": {
    // ...other fields
    "endpoints": {
      "monitoring-endpoint": {
        // ...other fields
        "authentication": {
          "type": "bearer",
          "token": "yourtoken"
        }
      }
    }
  }
}
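To illustrate, a minimal API client (a hypothetical sketch using Python's standard library, not an official client) would attach the token like this; the `/system/info` path comes from the API endpoints table:

```python
# Hypothetical sketch of a monitoring API client using bearer authorization.
import urllib.request

def monitoring_request(base_url, token, path):
    # The token travels in the Authorization header, as described above
    req = urllib.request.Request(base_url + path)
    req.add_header("Authorization", "Bearer " + token)
    return req

# Values match the example configuration above; send with urllib.request.urlopen(req)
req = monitoring_request("http://localhost:3000", "yourtoken", "/system/info")
```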
Configuring CORS
CORS (Cross-Origin Resource Sharing) allows you to control which origins are permitted to access resources on your server. This is important for security, especially if your monitoring endpoint is accessed from web applications hosted on different domains.
You can configure CORS in the `allow_cors_origins` field. There are three types of CORS configuration you can set: `any`, `none`, and `list`.
Allow any origin
This configuration allows any origin to access the monitoring endpoint.
{
  "server": {
    // ...other fields
    "endpoints": {
      "monitoring-endpoint": {
        // ...other fields
        "allow_cors_origins": {
          "type": "any"
        }
      }
    }
  }
}
Disallow all origins
This configuration disallows all origins from accessing the monitoring endpoint.
{
"server":{
// ...other fields
"endpoints":{
"monitoring-endpoint": {
// ...other fields
"allow_cors_origins": {
"type": "none"
}
}
}
}
}
Allow specific origins
This configuration allows only specified origins to access the monitoring endpoint. You need to provide a list of allowed origins.
{
"server":{
// ...other fields
"endpoints":{
"monitoring-endpoint": {
// ...other fields
"allow_cors_origins": {
"type": "list",
"origins": [
"https://example.com",
"https://anotherdomain.com"
]
}
}
}
}
}
Make sure to configure CORS according to your security requirements to prevent unauthorized access from untrusted origins.
API endpoints
Endpoint | Method | Description |
---|---|---|
/system/info | GET | Retrieves system information including CPU usage, memory, and uptime. |
/system/endpoints | GET | Lists all configured endpoints on the server. |
/system/endpoints/:name | GET | Retrieves information about a specific endpoint by name. |
/system/clients | GET | Lists all connected clients. |
/system/clients/:id | GET | Retrieves information about a specific client by ID. |
/tunnels | GET | Lists all active tunnels. |
/tunnels/:id | GET | Retrieves information about a specific tunnel by ID. |
/tunnels/:id | DELETE | Disconnects a specific tunnel by ID. |
/links | GET | Lists all active links. |
/links/:id | GET | Retrieves information about a specific link by ID. |
/links/:id | DELETE | Disconnects a specific link by ID. |
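As a minimal sketch of calling these endpoints from a script, the example below builds a request against /system/info with a bearer token. The base URL and token are assumptions; substitute your monitoring endpoint's actual address and the token from your configuration.

```python
import urllib.request

# Assumed values: replace with your monitoring endpoint address and token.
BASE_URL = "http://localhost:3457"   # hypothetical monitoring endpoint address
TOKEN = "yourtoken"                  # the bearer token from your server config

def monitoring_request(path: str) -> urllib.request.Request:
    """Build a request carrying the bearer token in the Authorization header."""
    return urllib.request.Request(
        BASE_URL + path,
        headers={"Authorization": "Bearer " + TOKEN},
    )

req = monitoring_request("/system/info")
# urllib.request.urlopen(req) would perform the call against a running server.
print(req.full_url)                      # http://localhost:3457/system/info
print(req.get_header("Authorization"))   # Bearer yourtoken
```

With basic authorization instead, you would send a standard Basic Authorization header rather than the Bearer one.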
Setting up endpoint encryption
Encryption is available for HTTP, TCP and Monitoring endpoints.
See setting up certificates for information on how to setup certificates for server and tunnel.
There are two ways to set up encryption: using the server's own certificate or using a custom certificate.
Using main server's certificate
This is the simpler approach: it uses the certificate already defined in the main tunnelize server's configuration, so you can use it for this endpoint without specifying it multiple times.
Important
If the tunnelize server is not using encryption when tunneling data but this endpoint requires it, this will result in an error and the server will not be able to run properly.
Configuration will look like:
{
// ...other fields
"encryption": {
"type": "tls",
},
}
Using a custom certificate
Using a custom certificate allows you to set a custom TLS certificate for an endpoint which may be different from tunnelize server's own certificate. This allows you to create multiple endpoints, each with its own certificate.
This is useful if, for example, your HTTP endpoints use a wildcard certificate while the main server's certificate is not a wildcard.
Configuration will look like:
{
"encryption": {
"type": "tls",
"cert_path": "/path/to/server.crt",
"key_path": "/path/to/server.key"
}
}
Setting up a tunnel
Tunneling is the main purpose of tunnelize. It allows you to tunnel any kind of local data from your machine through the tunnelize server of your choice to the desired client.
Initialization
To start tunneling, first initialize tunnel configuration. This can be done in two ways:
Initializing using the default config
This will create a tunnelize.json configuration file with a default configuration you can use to set up your tunnels. To do this, run tunnelize init tunnel.
Keep in mind that this requires you to already know the proper tunnelize server configuration.
Provisioning via server config
Tunnelize is able to connect to the server directly, pull in the correct configuration, and create an example tunnel configuration you can use directly without needing full knowledge of the tunnelize server.
To do this run:
tunnelize init tunnel --server=my-tunnelize-server.com
Tunnelize will connect to my-tunnelize-server.com at the default port 3456, download the information, and create a config you can use to forward your local connections. If your server uses another port, add it via :PORT (for example: my-tunnelize-server.com:5050).
Use the following options to handle other cases:
Option | Description | Example |
---|---|---|
--key | Specifies the tunnel key to use for authenticating with the server. | --key=my-tunnel-key |
--tls | Enables TLS for the connection to the server. | --tls |
--ca | Path to the custom CA (Certificate Authority) certificate file for TLS. If not specified, the OS's native CA certificates will be used. | --ca=/path/to/ca.crt |
Configuring a tunnel manually
To configure the tunnel manually, create a tunnelize.json and configure it:
{
"tunnel": {
"name": "my-tunnel",
"server_address": "localhost",
"proxies": [
// ...proxy configuration
]
}
}
Fields:
Name | Description | Default Value |
---|---|---|
name | Name of the tunnel. Optional, helps identify the tunnel in monitoring. | Empty string |
server_address | Hostname or address to the main tunnelize server. | No default |
server_port | Port of the server | 3456 |
forward_connection_timeout_seconds | How long to wait, in seconds, for the first response from your local server before disconnecting. | 30 |
encryption | Type of encryption. See configuring encryption below. | No encryption |
tunnel_key | Key for the tunnel | No key specified |
monitor_key | Key for monitoring | No key specified |
proxies | Proxy configuration. See configuring proxies below. | No default |
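Putting the fields above together, a fuller manual configuration might look like the sketch below. All values are illustrative; substitute your own server address, keys, and proxies.

```json
{
  "tunnel": {
    "name": "my-tunnel",
    "server_address": "my-tunnelize-server.com",
    "server_port": 3456,
    "forward_connection_timeout_seconds": 30,
    "encryption": { "type": "tls" },
    "tunnel_key": "my-tunnel-key",
    "monitor_key": "my-monitor-key",
    "proxies": [
      // ...proxy configuration
    ]
  }
}
```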
Configuring Encryption
Encryption can be one of two types:
No encryption required
{
"type": "none"
}
In this case, any tunnelize client can connect and pass data over an unencrypted connection. This means that all data passed between the tunnel and the server is visible to third parties.
TLS encryption
{
"type": "tls"
}
TLS encryption will be used.
All available fields are:
Name | Value | Default |
---|---|---|
type | Type of encryption. Always tls in this case. | No default |
ca_path | Path to the certificate authority (ca.crt) certificate for self-signed certificate validation. If not set, OS native certificates will be used for validation. | No certificate authority |
See setting up certificates for information on how to use certificate files.
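Combining both fields, a tunnel-side TLS configuration validating a self-signed server certificate might look like this (the ca.crt path is an example):

```json
{
  "tunnel": {
    // ...other fields
    "encryption": {
      "type": "tls",
      "ca_path": "/path/to/ca.crt"
    }
  }
}
```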
Configuring proxies
Proxies decide what kind of traffic you want to tunnel. Keep in mind that if the tunnelize server is not configured with an endpoint of a given type, you will not be able to tunnel that kind of traffic. You will also need to know the names of the endpoints defined on the server in order to use them. If you host the server yourself, that is easy to find out; if you are using someone else's server, it might be a challenge, in which case you should provision the configuration via the server.
To set up a proxy, add a new value to the proxies array:
{
"tunnel": {
"proxies": [
{
"endpoint_name": "http",
"address": "localhost",
"port": 8080,
"endpoint_config": {
// proxy specific endpoint settings
}
}
]
}
}
Fields:
Name | Description | Default Value |
---|---|---|
endpoint_name | The name of the endpoint of the same type this proxy will forward connections to. | No default |
address | The IP address of the local server you want to forward connections from. | No default |
port | The port number of the local server you want to forward connections from. | No default |
endpoint_config | Proxy settings to pass to the endpoint. Must be valid values for the endpoint. See below. | No default |
Important
When defining an endpoint for a proxy, you must make sure that the type of the proxy matches the type of the endpoint; otherwise, your tunnel connection will be rejected.
You can set up the following proxies:
- HTTP
- TCP
- UDP
Setting up HTTP
To set up the endpoint config for HTTP, set the following JSON on the HTTP proxy:
{
"tunnel": {
// ...other fields
"proxies": [
{
// ...other fields for http proxy
"endpoint_config": {
"type": "http",
"desired_name": "desired-name"
}
}
]
}
}
Fields:
Name | Description | Default Value |
---|---|---|
type | Type of tunnel. For an HTTP endpoint, always http. | No default |
desired_name | Desired name, used in the {name} part of the endpoint hostname template, assigned to this proxy if allowed and not already taken. Otherwise, it is ignored. | No value |
Setting up TCP
To set up the endpoint config for TCP, set the following JSON on the TCP proxy:
{
"tunnel": {
// ...other fields
"proxies": [
{
// ...other fields for tcp proxy
"endpoint_config": {
"type": "tcp",
"desired_port": 1234
}
}
]
}
}
Fields:
Name | Description | Default Value |
---|---|---|
type | Type of tunnel. For a TCP endpoint, always tcp. | No default |
desired_port | Desired port, assigned to this proxy if allowed and not already taken. Otherwise, it is ignored. | No value |
Setting up UDP
To set up the endpoint config for UDP, set the following JSON on the UDP proxy:
{
"tunnel": {
// ...other fields
"proxies": [
{
// ...other fields for udp proxy
"endpoint_config": {
"type": "udp",
"desired_port": 1234,
"bind_address": "0.0.0.0:0"
}
}
]
}
}
Fields:
Name | Description | Default Value |
---|---|---|
type | Type of tunnel. For a UDP endpoint, always udp. | No default |
desired_port | Desired port, assigned to this proxy if allowed and not already taken. Otherwise, it is ignored. | No value |
bind_address | Bind address and port used to listen for data from your local UDP server. If not set, a random available port on address 0.0.0.0 will be used. | 0.0.0.0:0 |
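Putting the proxy types together, a single tunnel can combine several proxies, one per endpoint. The endpoint names and ports below are examples and must match the endpoints defined on your server:

```json
{
  "tunnel": {
    // ...other fields
    "proxies": [
      {
        "endpoint_name": "http",
        "address": "localhost",
        "port": 8080,
        "endpoint_config": { "type": "http" }
      },
      {
        "endpoint_name": "tcp",
        "address": "localhost",
        "port": 5432,
        "endpoint_config": { "type": "tcp", "desired_port": 5432 }
      }
    ]
  }
}
```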
Monitoring
Monitoring in this project is designed to help you keep track of the system's performance and health. It allows you to observe various metrics and logs to ensure that everything is running smoothly and manage currently connected tunnels, links and clients.
Monitoring can be done through an API endpoint or through CLI commands. This section explains the CLI commands.
Configuration
Before you can run the monitoring commands, ensure that the server is properly configured and that the appropriate authentication method is set up to allow access to the monitoring commands. You can set the monitor_key in tunnelize.json:
{
"tunnel": {
"monitor_key": "secretkey"
}
}
Running commands
Once the monitoring key is set for your tunnel, you can run the following monitoring commands:
Command | Description | Example |
---|---|---|
tunnelize monitor system-info | Retrieves system information including CPU usage, memory, and uptime | tunnelize monitor system-info |
tunnelize monitor list-endpoints | Lists all configured endpoints on the server | tunnelize monitor list-endpoints |
tunnelize monitor list-tunnels | Lists all active tunnels | tunnelize monitor list-tunnels |
tunnelize monitor get-tunnel tunnel_id | Retrieves information about a specific tunnel by ID | tunnelize monitor get-tunnel 123e4567-e89b-12d3-a456-426614174000 |
tunnelize monitor disconnect-tunnel tunnel_id | Disconnects a specific tunnel by ID | tunnelize monitor disconnect-tunnel 123e4567-e89b-12d3-a456-426614174001 |
tunnelize monitor list-clients | Lists all connected clients | tunnelize monitor list-clients |
tunnelize monitor get-client client_id | Retrieves information about a specific client by ID | tunnelize monitor get-client 123e4567-e89b-12d3-a456-426614174002 |
tunnelize monitor list-links | Lists all active links | tunnelize monitor list-links |
tunnelize monitor get-link link_id | Retrieves information about a specific link by ID | tunnelize monitor get-link 123e4567-e89b-12d3-a456-426614174003 |
tunnelize monitor disconnect-link link_id | Disconnects a specific link by ID | tunnelize monitor disconnect-link 123e4567-e89b-12d3-a456-426614174004 |
Note that the response from all of the commands is JSON, meaning it can be piped for further processing.
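For example, the JSON output can be piped into a small script for filtering. The sample below assumes a hypothetical output shape for list-tunnels; the real field names may differ, so inspect the actual output first.

```python
import json

# Hypothetical sample of what `tunnelize monitor list-tunnels` might print;
# in practice you would read it from stdin (e.g. via sys.stdin.read()).
sample_output = '[{"id": "123e4567-e89b-12d3-a456-426614174000", "name": "my-tunnel"}]'

tunnels = json.loads(sample_output)
ids = [t["id"] for t in tunnels]
print(ids)  # ['123e4567-e89b-12d3-a456-426614174000']
```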
Command reference
The tunnelize command reference can also be shown by running tunnelize help.
Command | Subcommand | Arguments | Description |
---|---|---|---|
init | all | - | Initialize tunnelize.json for both tunnel and server with example configuration. |
init | tunnel | -s, --server <SERVER> | Initialize tunnelize.json for tunnel. If -s, --server is passed it will connect to tunnelize server to pull in config. |
-t, --tls | Use TLS to connect to server | ||
-c, --cert <CERT> | Path to custom CA certificate file for TLS | ||
-k, --key <KEY> | Tunnel key for server authentication | ||
init | server | - | Initialize tunnelize.json for server configuration |
server | -c, --config <CONFIG> | Starts tunnelize server using tunnelize.json from current directory. | |
tunnel | -c, --config <CONFIG> | Starts tunnelize tunnel using tunnelize.json from current directory. | |
-v, --verbose | Show detailed output for tunnel connection | ||
monitor | system-info | -c, --config <CONFIG> | Display system information. |
monitor | list-endpoints | -c, --config <CONFIG> | List all endpoints. |
monitor | list-tunnels | -c, --config <CONFIG> | List all tunnels. |
monitor | get-tunnel | -c, --config <CONFIG> | Get tunnel information by UUID. |
monitor | disconnect-tunnel | -c, --config <CONFIG> | Disconnect tunnel by UUID. |
monitor | list-clients | -c, --config <CONFIG> | List all clients. |
monitor | get-client | -c, --config <CONFIG> | Get client information by UUID. |
monitor | list-links | -c, --config <CONFIG> | List all links. |
monitor | get-link | -c, --config <CONFIG> | Get link information by UUID. |
monitor | disconnect-link | -c, --config <CONFIG> | Disconnect link by UUID. |
On commands using -c, --config: if it is passed, that config JSON file is loaded; otherwise, tunnelize.json is loaded from the current working directory.
Environment
Tunnelize supports setting environment variables to modify the log output. The following environment variables can be set:
Name | Description | Possible Values | Default Value |
---|---|---|---|
LOG_LEVEL | Sets the logging level for the application | error , warn , info , debug | info |
LOG_COLORS | Enables or disables colored log output | true , false | true |