Fitbit Intraday Heart Rate Tracking (with Code)

I recently upgraded to the Fitbit Alta HR. It is a big step up from the Fitbit Flex I had been using for the past several years. The best part is the heart rate tracking, which also improves the sleep quality analysis.

Fitbit Alta HR captures heart rate periodically (at intervals of a few seconds)

What was missing?

However, one thing I missed with the Alta HR and the Fitbit app was access to the detailed heart rate data. For instance, below is my heart rate during an emotionally charged situation at the workplace. Note that I was sitting, with no physical activity!

Heartbeat - Emotionally Charged

Heart Rate Variation while sitting – but with Emotions Running High!

If you notice, the Fitbit app shows heart rate data in 5-minute intervals. Being the self-quantification person that I am, that was not enough: I wanted the entire data that the Alta HR recorded. Thus, I began working on retrieving it reliably.

 

The End Result, & How You Can Use It

I have created a web version where you can review your Heart Rate data quickly.

Get your Heart Rate Chart at https://exain.com/fitbit

Remember to see the Tutorial on how to retrieve your “Client ID” from the Fitbit website.

Note: The application I have created saves the heart rate data in a database. It means that I will have your heart rate data. However, I am not asking for or collecting any user information. So despite the fact that I have the heart rate data, I cannot link it with an individual.

I have open sourced the entire code, and it is available on GitHub.

https://github.com/technotablet/fitbit

How it works

  • Fitbit allows developers to connect to its API after authentication through the OAuth2 Protocol.
  • Since the heart rate data is personal to a user, Fitbit does not permit third party developers to access heart rate data of another user.
  • You will need to authenticate yourself on the Fitbit Developer Portal and create an “App” on it. It is a relatively simple process, and if you are connecting to my service at https://exain.com/fitbit, you can view the tutorial on YouTube. If you would rather pull the data yourself, a minimal API call is sketched below.
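For reference, here is a minimal PHP sketch of the kind of call involved (my own illustration, not necessarily how the exain.com application does it). It assumes you have already completed the OAuth2 flow and hold an access token for your own account; intraday detail generally requires a “Personal” app type on the Fitbit Developer Portal. The token value and date below are placeholders.

<?php
// A minimal sketch: fetch the intraday heart rate series at 1-second detail
// from the Fitbit Web API for your own account.
// YOUR_OAUTH2_ACCESS_TOKEN and the date below are placeholders.
$accessToken = "YOUR_OAUTH2_ACCESS_TOKEN";
$date = "2017-10-01"; // yyyy-MM-dd

$url = "https://api.fitbit.com/1/user/-/activities/heart/date/$date/1d/1sec.json";

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array("Authorization: Bearer $accessToken"));
$response = curl_exec($ch);
curl_close($ch);

$data = json_decode($response, true);

// Each point has a "time" (HH:mm:ss) and a "value" (beats per minute)
foreach ($data["activities-heart-intraday"]["dataset"] as $point) {
    echo $point["time"] . " " . $point["value"] . "\n";
}
?>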

Now Go – Track your HB!

Port Forwarding in AWS LightSail or EC2 machines via SSH

I have a Smart Lighting system at home powered by Philips Hue. I was trying to connect to my Philips Hue Bridge’s IP remotely without implementing Port Forwarding on my WiFi Router.

Instead of setting up an EC2 instance, I went ahead with a Lightsail instance, which is much less complicated than EC2 and provides the private key download, the firewall settings, etc. upfront for easy and convenient access.

Disclaimer: The process I describe below may not be optimal if you are opening up sensitive/unprotected ports without appropriate security measures. Use your own judgement before you implement port forwarding.

Following is an example of what I planned to do: access port 9090 on my Lightsail instance and have it reach the Philips Hue Bridge at my home.

Port Forwarding Setup using AWS Lightsail/EC2

  • I had opened port 9090 through the Firewall option in Lightsail
  • I had also set a password for the root user using the command sudo passwd

However, the port forwarding did not work right away: on the default Lightsail image, root login over SSH is restricted, and sshd binds remote-forwarded ports only to the loopback interface because GatewayPorts defaults to no.

I made the following changes in /etc/ssh/sshd_config to enable port forwarding.

# Changed the following line
PermitRootLogin yes

# Added the following at the bottom

# Skip reverse DNS lookups when clients connect
UseDNS no

# Keep idle SSH sessions (and the tunnel) alive
ClientAliveInterval 180
ClientAliveCountMax 3

# Make remote-forwarded ports listen on all interfaces, not just loopback
GatewayPorts yes

Then I restarted sshd as root

/etc/init.d/ssh restart

After that I was able to do the port forwarding smoothly by executing the following command on my Desktop at home (your needs may vary, so modify accordingly)

ssh -i key.pem -R *:9090:192.168.0.75:80 root@101.102.103.104

Now, if I reach out to port 9090 on 101.102.103.104 from a remote machine, it works well. The command man ssh will help you understand the -L (forward a local port to a remote destination) and -R (forward a remote port back to a local destination) options better. You can also use PuTTY to set up the port forwarding.
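As a quick sanity check (my own addition, not part of the original setup), you can request the forwarded port from any machine and look at the HTTP status code. 101.102.103.104:9090 is the example address used above, so substitute your own instance and port; this assumes the device behind the tunnel answers plain HTTP, as the Hue Bridge does on port 80.

<?php
// Request the forwarded port from a remote machine and print the HTTP status code.
// A status of 200 suggests the tunnel is up and the device behind it is answering.
$ch = curl_init("http://101.102.103.104:9090/");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
curl_exec($ch);
echo "HTTP status: " . curl_getinfo($ch, CURLINFO_HTTP_CODE) . "\n";
curl_close($ch);
?>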

Amazon RDS Multi-AZ Setup Failover Simulation

I had set up an Amazon RDS MySQL instance with the Multi-AZ option turned on. However, I couldn’t tell whether the Multi-AZ setup was working as expected. So I prepared the test cases below to simulate downtime and verify that the failover worked and the servers switched places.

I am assuming you have already set up a Multi-AZ RDS instance for MySQL. If not, check out http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.MySQL.html

How will we test it?

  1. Identify the two servers that AWS allocates to us (primary & secondary).
  2. Add data to / load test the primary server, and reboot it to simulate downtime.
  3. Verify that the switchover happened and that the data is consistent.

Base Setup

  1. Multi-AZ MySQL installation (db.t2.micro) in Mumbai Location (AP-South-1)
  2. Ubuntu EC2 Instance (t2.nano) in Mumbai Location (AP-South-1)
  3. Security Group Changes to allow access to the incoming port 3306 from the internal IP address of the EC2 instance.
AWSRDS-Security Group

Security Group Settings – 172.31.28.190 is the IP address of the EC2 instance


Determine Primary & Secondary Zone IPs for your RDS instance

In an Amazon RDS Multi-AZ setup, two Availability Zones within a region are used: the primary (shown in the console as “Availability Zone”) and the secondary (shown as “Secondary Zone”).

The purpose of a Multi-AZ setup is to run your database in an automatic-failover environment with a standby server replicated in real time. If the primary server goes down, the standby is promoted to primary and service continues. Refer to https://aws.amazon.com/rds/faqs/#129 and https://aws.amazon.com/rds/details/multi-az/

AWSRDS-RDS Details

Endpoints & Availability Zones in RDS

Amazon RDS provides you with an Endpoint: a domain name that you use as the hostname to connect to your MySQL instance on port 3306.

The Endpoint is a DNS CNAME that, at any given time, points to one of the two instances in the different Availability Zones (primary or secondary), with a TTL of 5 seconds. Our first step will be to determine what these two instances are, and later to confirm that the Availability Zone has switched successfully.

Note that the Availability Zone currently is ap-south-1b (as in the screenshot above), and the Secondary Zone is ap-south-1a.

For our reference, we’ll use testrds.cdjw6bxi4s1f.ap-south-1.rds.amazonaws.com as the Endpoint that we have. Yours will of course vary.

On your EC2 terminal, run the following

while true; do host testrds.cdjw6bxi4s1f.ap-south-1.rds.amazonaws.com | grep alias ; sleep 1; done

(you can exit using CTRL+C at any point of time)

The above loop keeps checking, via DNS, which host the testrds.cdjw6bxi4s1f.ap-south-1.rds.amazonaws.com Endpoint points to. The output will look something like the following

testrds.cdjw6bxi4s1f.ap-south-1.rds.amazonaws.com is an alias for ec2-13-126-202-244.ap-south-1.compute.amazonaws.com.

Note the alias name ec2-13-126-202-244.ap-south-1.compute.amazonaws.com.

This is the server that is actually running the MySQL instance, and it is what your scripts will eventually connect to. It is the MySQL server in the ap-south-1b zone assigned to your instance.
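If you prefer PHP over a shell loop, here is an equivalent sketch of my own that resolves the Endpoint’s CNAME once per second; replace the endpoint with yours and stop it with CTRL+C.

<?php
// Poll the RDS Endpoint's CNAME once per second and print the current target host.
$endpoint = "testrds.cdjw6bxi4s1f.ap-south-1.rds.amazonaws.com";

while (true) {
    $records = dns_get_record($endpoint, DNS_CNAME);
    if (!empty($records)) {
        echo date("H:i:s") . " " . $records[0]["target"] . "\n";
    }
    sleep(1);
}
?>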

Now let’s simulate a scenario through reboot where we will make the Secondary Zone the Primary.

  • From your AWS console, select the DB Instance, and under Instance Options, select “Reboot”.
AWSRDS-Reboot Instance

Rebooting RDS DB Instance

  • Under the Reboot options, select the option “Reboot With Failover?”, and click Reboot.
AWSRDS-Reboot Option

Reboot With Failover option

  • Continue to monitor the terminal where you were checking the domain name pointing for your Endpoint.
AWSRDS-RDS Endpoint DNS Change

Endpoint’s DNS Alias Changes on Server Switching

It takes less than 60 seconds for the Endpoint’s DNS information to change. You will then see a new domain name that your Endpoint points to.

testrds.cdjw6bxi4s1f.ap-south-1.rds.amazonaws.com is an alias for ec2-13-126-190-48.ap-south-1.compute.amazonaws.com.

You can refresh the AWS console. It can take anywhere from a few seconds to about 10 minutes for the updated Availability Zone information to appear on the console.

AWSRDS-RDS Details Zone Change

Availability Zone Switchover successful

If you notice, the Availability Zone has now become ap-south-1a (instead of 1b), and the Secondary Zone is now ap-south-1b (instead of 1a). The servers have swapped roles, and you can still connect only to the primary server.

Results for the above setup (your information will vary):

  • ap-south-1a is pointing to ec2-13-126-190-48.ap-south-1.compute.amazonaws.com.
  • ap-south-1b is pointing to ec2-13-126-202-244.ap-south-1.compute.amazonaws.com.

Note: You can only connect to one of the servers at a time, and that is the Primary Availability Zone server.


Testing Multi-AZ Failover

Referring to https://aws.amazon.com/rds/details/multi-az/, Multi-AZ works as a synchronous primary/standby pair. Two servers run simultaneously: the primary is accessible to the end user, and data is replicated in real time to a standby server (residing in a different zone) which is not accessible to the end user.

If the primary server becomes unavailable, the server in the Secondary Zone is promoted to primary, and hence becomes accessible to the end user and the application.

Test Case 1 – We’ll keep connecting to the database and inserting one record at a time. The purpose is to check how much time the failover takes, i.e. how long it takes for the Primary Availability Zone to become the Secondary Availability Zone and vice versa.

Test Case 2 – We’ll connect directly to the Primary Zone’s instance (instead of using the Endpoint provided) and start adding data through multiple clients. While the data is being added, we’ll reboot the machine with the failover option. This makes the Primary Zone secondary and inaccessible, and the Secondary Zone becomes primary. We will then verify that all the records we inserted into the now-secondary server are available on the now-primary server.

I will use basic PHP scripting to test the failover and whether the data is replicated correctly. You can reproduce it in any language you prefer.

  • Install PHP & MySQL client
sudo apt-get install mysql-client-core-5.6 php5-cli php5-mysql
  • Connect to MySQL, and create a table in the MySQL Database (please replace the values based on your environment)
 mysql -h testrds.cdjw6bxi4s1f.ap-south-1.rds.amazonaws.com -u vivek -p FirstRDSdb
CREATE TABLE `failover_test` (
 `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
 `cycle` varchar(50) DEFAULT NULL,
 `counter` int(10) unsigned NOT NULL,
 `failover_date` datetime NOT NULL,
 PRIMARY KEY (`id`)
 ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1 COMMENT='AWS RDS Failover Testing';

 

Test Case 1 Implementation

Create a PHP script named failover_test.php with the following content

<?php

$host = "testrds.cdjw6bxi4s1f.ap-south-1.rds.amazonaws.com"; // AWS Endpoint
$user = "vivek";
$password = "password";
$dbname = "FirstRDSdb";

if (!isset($argv[1])) {
 exit("Provide a cycle name\n");
}
if (!isset($argv[2])) {
 exit("Provide the id value (that will be coming from the for loop)\n");
}
$cycle = $argv[1];
$count = $argv[2];

// Connect and insert a single row; the surrounding shell loop supplies the counter
$conn = mysqli_connect($host, $user, $password, $dbname);

if (!$conn) {
 $date = date("Y-m-d H:i:s");
 exit("\n----------- NOT CONNECTED $count / $date --------------\n");
}

$q = mysqli_query($conn, "insert into failover_test set cycle='$cycle', counter='$count', failover_date=now()");

if (!$q) {
 $date = date("Y-m-d H:i:s");
 echo "\n----------- NOT INSERTED $count / $date --------------\n";
} else {
 echo "$count.";
}

mysqli_close($conn);
?>

Action Plan

  1. We will execute the above PHP script in a loop.
  2. While the terminal is executing the script, we will reboot the database with the option ‘Reboot with Failover?‘.
  3. We will monitor the PHP script and notice any numbers that are missing. The count of the missing numbers will give you the total downtime in seconds (approximately).

On your EC2 machine, from the same location where you saved your PHP script, run the following bash command from the terminal

for i in {1..5000}; do timeout 1 php failover_test.php cycle0 $i ; done

(You can use CTRL+C to terminate if you see any errors, or once your work/testing is over)

The above loop inserts entries numbered 1 to 5000 into the database (increase the upper limit if the numbers run out before you finish testing). The timeout command ensures that if an insert gets no response within 1 second, that invocation exits and the loop moves on to the next number.

Now move on to the AWS console, and reboot the database instance with the option “Reboot With Failover?” selected.

AWSRDS-Reboot Option

Reboot with Failover option

Continue to monitor the script that is being executed.

AWSRDS-Test1 Downtime

Calculating the Downtime. Use CTRL+C to end the script execution.

Note the period where the data insertion pauses and no numbers are displayed: it means the Primary Zone’s server has shut down and your EC2 instance cannot connect to any RDS server. Once the numbers start appearing again, the server in the Secondary Zone has been promoted to primary. Count the missing numbers; since each failed attempt times out after 1 second, their count gives you the approximate total downtime in seconds.
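If you would rather compute the downtime from the database instead of counting on screen, here is a small sketch of my own that finds the largest break in the counter sequence for a cycle, using the same connection details as the test script.

<?php
// Find the largest gap in the counter sequence for a cycle; since each failed
// attempt took ~1 second to time out, the gap approximates the downtime in seconds.
$host = "testrds.cdjw6bxi4s1f.ap-south-1.rds.amazonaws.com";
$user = "vivek";
$password = "password";
$dbname = "FirstRDSdb";
$cycle = isset($argv[1]) ? $argv[1] : "cycle0";

$conn = mysqli_connect($host, $user, $password, $dbname);
$result = mysqli_query($conn, "select counter from failover_test where cycle='$cycle' order by counter");

$previous = null;
$largestGap = 0;
while ($row = mysqli_fetch_assoc($result)) {
    $current = (int)$row["counter"];
    if ($previous !== null && $current - $previous - 1 > $largestGap) {
        $largestGap = $current - $previous - 1;
    }
    $previous = $current;
}
mysqli_close($conn);

echo "Largest gap for $cycle: $largestGap missing inserts (~$largestGap seconds of downtime)\n";
?>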

 

Test Case 2

In this test, we will connect directly to the primary instance only and flood it with data. We will then reboot with failover and make the secondary instance primary. The aim is to test whether the data saved on the primary instance is correctly replicated to the secondary.

Create a new PHP Script failover_load.php

<?php

// You need to get the relevant servers for your testing through
// monitoring DNS changes as I mentioned in the document above
$zoneA = "ec2-13-126-190-48.ap-south-1.compute.amazonaws.com.";
$zoneB = "ec2-13-126-202-244.ap-south-1.compute.amazonaws.com.";

// FOLLOWING IS VERY IMPORTANT
// Select the zone that is currently primary - so that your script can connect to it
// You can get this information from the AWS Console for your DB Instance
$host = $zoneB;
$user = "vivek";
$password = "password";
$dbname = "FirstRDSdb";

if (!isset($argv[1])) {
 exit("Provide a cycle name\n");
}
$cycle = $argv[1];

$conn = mysqli_connect($host, $user, $password, $dbname);

if (!$conn) {
 exit("Could not connect to $host\n");
}

for ($count = 1; $count < 1000000; $count++) {

 $q = mysqli_query($conn, "insert into failover_test set cycle='$cycle', counter='$count', failover_date=now()");

 // Stop as soon as an insert fails - the primary has gone down, so the last
 // number printed matches the last row that actually made it into the table
 if (!$q) {
  echo "\nInsert failed at $count - stopping\n";
  break;
 }

 echo "$count.";

}

mysqli_close($conn);
?>

Action Plan

  1. We will open 5 terminal windows and execute the above script 5 times, with different cycle names for differentiation
  2. While the scripts are being executed, we will reboot the Database instance with “Reboot with Failover?” option checked.
  3. Once the scripts stop inserting data, we’ll note the maximum number reached in each terminal and match it against the database records.

Open 5 terminal windows, connect to your EC2 instance in each, and prepare to run the failover_load.php script with a different cycle name in each window (just for identification).

AWSRDS-Prepare RDS Load Test

5 separate terminal windows, with different cycle names. Prepared for execution.

Now execute the commands one by one; the faster you do it, the better. While the data is being inserted, go to the AWS console and reboot the DB instance with the “Reboot with Failover?” option selected.

AWSRDS-Reboot Option

Reboot with Failover option

Why did we do this?

The purpose is to write data rapidly into the RDS database and, while the data is being written, reboot the database instance so that the ‘Secondary Zone’ becomes the ‘Primary’. Since we connected directly to the RDS instance in the Primary Zone (ec2-13-126-202-244.ap-south-1.compute.amazonaws.com.) instead of using the default AWS-provided Endpoint (testrds.cdjw6bxi4s1f.ap-south-1.rds.amazonaws.com), the server we are inserting data into stops responding as soon as the reboot happens.

As you can see in the screenshots below, the insertions stopped at the following numbers for each cycle:

  • cycle1 – 681
  • cycle2 – 635
  • cycle3 – 571
  • cycle4 – 529
  • cycle5 – 490
AWSRDS-Load Test Cycle1

For Cycle1 – 681

AWSRDS-Load Test Cycle2

For Cycle2 – 635

AWSRDS-Load Test Cycle3

For Cycle3 – 571

AWSRDS-Load Test Cycle4

For Cycle4 – 529

AWSRDS-Load Test Cycle5

For Cycle5 – 490

 

 

We have already rebooted the database server, and now we have a new Primary Server.
We will connect to it using the mysql client from our EC2 machine, and run the following queries

ubuntu@ip-172-31-28-190:~$ mysql -h ec2-13-126-190-48.ap-south-1.compute.amazonaws.com. -u vivek FirstRDSdb -p
mysql>
mysql> select max(counter) from failover_test where cycle='cycle1';
+--------------+
| max(counter) |
+--------------+
|          681 |
+--------------+
1 row in set (0.01 sec)

mysql> select max(counter) from failover_test where cycle='cycle2';
+--------------+
| max(counter) |
+--------------+
|          635 |
+--------------+
1 row in set (0.01 sec)

mysql> select max(counter) from failover_test where cycle='cycle3';
+--------------+
| max(counter) |
+--------------+
|          571 |
+--------------+
1 row in set (0.01 sec)

mysql> select max(counter) from failover_test where cycle='cycle4';
+--------------+
| max(counter) |
+--------------+
|          529 |
+--------------+
1 row in set (0.01 sec)

mysql> select max(counter) from failover_test where cycle='cycle5';
+--------------+
| max(counter) |
+--------------+
|          490 |
+--------------+
1 row in set (0.01 sec)

As you can see, the counter values above match what we saw in the terminals while adding the data with the failover_load.php script.

What we can infer from the test results above is:

  • RDS Multi-AZ maintains a synchronously replicated standby copy of the primary’s data
  • If the primary server goes down, the secondary server is promoted to primary with all the data that was written to the old primary, and services continue to operate
  • You can connect only to the primary server at any given time

 

I believe the above is a fairly good exercise, and we were able to simulate the failover. However, since the system performs a clean reboot, the data gets synchronised properly. A better test would be an abrupt shutdown of the database (as in a hardware failure), to see how reliably and swiftly the failover happens.

7 Reasons for UX #FAIL in Enterprise Software

It is nearly impossible to have consistently good UX in enterprise software.

That is a strong statement, and I’d love to be proven wrong. I come from a background where I developed applications within an enterprise for a defined target audience. I have also built software that was available publicly on the Internet. For more than 15 years now, I’ve seen all sides of the table as both a developer and an end user.

First, What works well for Non-Enterprise Software

When you create software for general public consumption, the following is what aids you:

  1. Clarity of Thought. You and your team have planned it out and immersed yourself in it. You know what needs to be done and are focused towards the goal.
  2. Passion. Not only the design & development team, but even stakeholders or subject matter experts are passionate about the software and what’s being built.
  3. Authority. It is for the general public, and you have the authority to take decisions which you think are in the best interest of everyone.
  4. Innovation. That’s the buzzword. You want to innovate and so does everyone else in your team.
  5. Focus. Your focus is to create an Awesome User Experience, period.
  6. Analytics. You measure the usage, and you know what’s working and what isn’t. That impacts the further development, and the evolution continues.

And thus you have software you are proud of, and you don’t hesitate to modify it, because you know that incremental innovations will keep enhancing the user experience. Take any top-rated app on the App Store, and you will find this method is what made it successful: Build, Measure, Learn, Repeat.

Enterprises? What a #FAIL

The same gleamy-eyed UX expert, who built software for the masses and was totally inspired by the UX revolution happening around the globe, stumbles, and more often than not fails badly, in an enterprise setup. The leading 7 causes are:

  1. Target Audience. This is considered a blessing: you know your users in detail, you know which systems they have access to, and you can intelligently design your solution around them. Wrong. You have a fundamental flaw: the audience you are looking at is deeply uninterested in what you are trying to build. You shouldn’t expect any excitement from them, except from a few key people.
  2. Burden Factor. Unlike an app that the users download from App Store with free will & interest, an enterprise software is forced upon them. How many of you would like to fill in your time sheets, or apply for leaves through a portal – every day?
  3. Resistance. Legacy application being upgraded? There would be resistance, everywhere. Bad karma for you again.
  4. Bottom Line. A good UX process not only requires significant time investment, but good money too. Not many organisations would want to invest in it as the immediate results are always intangible.
  5. Authority & Stakeholders. If the stakeholders do not believe in UX and its importance, they’ll simply dismiss it: no approval, and no go-ahead. Also, despite the large user base, the stakeholders are usually the owners who speak for everyone else, so it is imperative for them to believe in it.
  6. Features First, UX Second. All that matters are the Features – measurable, tangible and objective; the UX is always subjective, and often confused with design and visualisation. The Target Audience also have their wishlist and thus the features reign supreme. This may not be a cause of concern when the software is built the first time – but future updates and iterations can twist it in a way that the UX focus would go for a toss.
  7. Training Costs. Any new software deployment or visual change is accompanied by training across the organisation. It leads to reduced productivity, more complaints, and higher overall costs, so most prefer to avoid it.

By the time the lack of good UX is realised, it is often too late. People change, their replacements want to maintain the status quo, and thus nothing really changes. Eventually, the developers just add features, and the designers do visual design based on what seems appropriate to the stakeholders.

Tackling the UX in Enterprise Software

All is not lost, but I’ll admit it is not easy. Enterprise UX 2016 (http://enterpriseux.net/) aims to create a better and more sustainable environment, and I wish them luck. However, here are my two cents.

  1. Right People. It can make all the difference. Someone who can develop AND understands design AND understand interacting with customers and gauging their needs might be able to get it right.
  2. Compromise for Flexibility. Assume that the UX will change in the future and that more features will be put into the system. Thus the UX shouldn’t be treated as a holy grail; it should be flexible enough to incorporate future features.
  3. Transition Plan. Create something like a ‘beta’ preview so that early adopters can experiment and roll back to the stable version if it doesn’t meet their needs. All new users should be on the new interface, and the older ones slowly transitioned.
  4. Luck. That you get the right Stakeholders who believe in UX. I’ve been lucky on many occasions, and that has helped significantly.
  5. Complete Package. Always demonstrate the solution rendered well – including UX and design (and not just wireframes), and preferably with the right content. That creates much more impact, and can help out with getting the Stakeholders on board for UX.

Apparently, there are more who share the same sentiments. Uday Gajendar in his post Why I design enterprise UX, and you should too! is quite optimistic, and I guess that’s the way to be.

What has been your experience in creating an Enterprise Focused Software, and how much of a role has UX played in it?

Asterisk VOIP and pfSense IPSec VPN Clients

I had set up a pfSense 2.1 based IPsec VPN following the instructions at https://doc.pfsense.org/index.php/Mobile_IPsec_on_2.0, which worked well for my mobile devices and machines.

However, using a SIP-based softphone over the VPN to connect to my workplace’s Asterisk-based VOIP setup never really worked properly. I dabbled with changing subnet masks, Asterisk settings, phone settings, NAT, and many other things, none of which really worked. The most I achieved was calling *43, the echo test number, and hearing my own voice.

The reason it did not work was that my VPN used a different IP address range (e.g. 192.168.10.x/24) from my LAN (say 192.168.5.x/24). This is how the VPN is meant to be set up, but it allows only one-way communication: my VPN clients could reach the LAN, but the LAN could not reach the VPN clients. So while signaling to the Asterisk server worked, the media did not. The ring was there, but no voice, since Asterisk was trying to send the audio back to the 192.168.10.x range and my pfSense box wasn’t passing it through.

The simple solution was to add a firewall rule under the LAN settings allowing the LAN subnet to pass traffic to the 192.168.10.x/24 network (Protocol: any, Ports: any); by default this is blocked. After that I could also ping my VPN clients from the LAN, which is the ideal setup, even for remote troubleshooting.

Tracking your Route through GPS – Getting Started

It was 14th February 2013, Valentine’s Day. Inspired by one Motorola commercial, I set out to walk in a ‘heart shaped’ route, as a Valentine Gift for my beloved wife. The commercial is available at http://youtu.be/iG2DRiQt1b0 and the end result was tracked on Sports Tracker, with me walking in a heart shaped route near Lotus Temple, Delhi, India.


I don’t have a good navigation sense, and rely mostly on GPS/Google Maps/Map My India for my everyday needs. Thus, to accomplish this task I’d specially bought a professional compass to track the directions, and had planned to cover at least 2 kilometers walking/running (though eventually it was restricted to 100 meters, reasons I’ll not delve into).

But, the biggest gain (apart from the delighted beloved, and learning how to use a compass) was that I came across various techniques that are used across the globe to track/plan a route. It’s more awesome than I ever imagined!


Quest for Ultimate 3G Wireless Internet setup at home – Delhi, India

I had a deep desire to have a network setup at home that was wireless and flexible. I used to have two Wimax connections (Tata Wimax & Reliance Wimax) and one ADSL (MTNL) running 24×7. Unfortunately, Tata Wimax shut down its services, and I disconnected the Reliance Wimax connection. With the start of 3G services, I really wanted more flexibility and expected better uptime, and was ready to live with the disadvantages, such as the lack of a static IP and a significantly higher cost.

So, I set out to perform the longest and most expensive test I’ve ever done: 3 months and approximately Rs. 35,000 on equipment and prepaid recharges. The location was East Delhi, and the duration was October 2012 to December 2012. The performance may change in the future, so YMMV. The ingredients were:

Two 3G Routers

Two 3G USB Dongles & One 3G USB Supporting Router

I bought the DLink 456U off eBay, and the Micromax MMX400R, ZTE K3770-z, Huawei E1731, and ASUS RT-N66U from Nehru Place, Delhi.

Four Service Providers

These four service providers are licensed to provide 3G services in Delhi. I bought a prepaid SIM card from each of them and activated whatever 3G service and plan was needed.

I additionally bought a Nokia 101 dual-SIM phone (Rs. 1,500) to easily check balance, validity, etc., and to send/receive SMS messages and USSD codes.

First, The Verdict

Vodafone as the service provider, an unlocked Huawei E1731, and the ASUS RT-N66U are awesome, and I expect to stay with Vodafone for a long time to come.

This was my first Vodafone connection, and I was delighted to see how Vodafone performed. It’s slightly on the higher side cost-wise, but if you want serious net connectivity, Vodafone will suit you well, considering that it gets you:

  • Excellent Download and Upload speeds (unlike Airtel)
  • The connection doesn’t disconnect every few hours and thus the IP remains the same (unlike Reliance)
  • Connects in the first go and you don’t really have to wait (unlike MTNL)

Vodafone does not allow inbound connectivity to your setup, so you cannot easily do port forwarding unless it is initiated from the client end (such as by creating a tunnel using ssh -R on Linux).

Getting reliable connectivity between the ASUS router and a 3G USB dongle was the most difficult part, and the Huawei dongle performed satisfactorily.

3G ISP Performance Comparison, Delhi, India - December 2012
