Window End Time: Tuesday, June 16, 2:00am local time
Expected Impact: ~15 minutes
Reason: Hardware upgrade to improve capacity and security.
Kansas City Reason For Outage Notice
At approximately 11:20 PM US-CST, our Kansas City datacenter location lost power due to a hard short in the exterior electrical system, which blew the fuses feeding power to the data floor. We believe this electrical fault was caused by water intrusion into the electrical subsystem.
Once the fault was identified and exterior weather conditions allowed, work began immediately to isolate the electrical short and repair the defective section of wiring. This took several hours to complete.
Power was subsequently restored to the data floor at approximately 1:00 PM US-CST, and all servers were successfully powered back on with no issues.
Customers affected included all shared hosting customers and OpenVZ-based VPS customers in Kansas City.
RAM Host systems affected included the RAM Host official website, email, the support/billing system, and all control panels. Because of this, outage updates were provided on Twitter at https://twitter.com/RAMHOST for the duration of the incident.
At this time, all services have been confirmed restored. If you are still experiencing an outage, please submit a support ticket or email us at support [at] ramhost.us.
We are pleased to announce that we are going to be upgrading both the network and the server hardware at our Los Angeles datacenter facility. The owner of RAM Host will be at the Los Angeles datacenter personally supervising and handling this maintenance.
The maintenance window for this to occur is between Monday April 20th 2015 and Thursday April 23rd 2015.
During this maintenance procedure, we expect brief outages lasting no more than 1 hour. Servers will be power cycled, so expect VPSes in this location to reboot at least once.
This maintenance will be done in stages one server at a time to minimize disruption.
UPDATE Tuesday 7 PM GMT-6: The most difficult and disruptive portion of this maintenance has now been completed. From this point forward, no more server reboots or power cycles will be necessary; the only remaining work is network related. There may still be a few brief network outages, however.
UPDATE Wednesday 11 AM GMT-6: The network has now been completely transitioned to the new infrastructure. Everybody should immediately see network performance improvements. We did end up having to reboot vz8/vz9 again because of an unexpected software configuration problem. As of right now, all maintenance is complete!
We are pleased to announce that the forums have been restored to full working order after having been temporarily taken down due to spam and other maintenance concerns.
Along with this, we have also upgraded the forum software itself, which includes a full visual refresh of the forums, and we have updated the look of the news page on our website as well.
As most of you are probably aware, there was a very serious issue with OpenSSL recently that was of particular concern to web sites protected with HTTPS.
Many web sites using OpenSSL for SSL/TLS HTTPS encryption were compromised.
We would like to assure our users that our secure websites were never vulnerable to this serious issue, as they use OpenSSL 0.9.x from CentOS 5. The CP1 cPanel server also runs CentOS 5, so it was not impacted either. Only the more recent OpenSSL 1.0.1 branch (1.0.1 through 1.0.1f) was vulnerable to the Heartbleed bug.
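The vulnerable range is easy to check mechanically. As a minimal sketch (the helper below is illustrative only, not part of our tooling), a version string reported by "openssl version" can be classified like this:

```python
def is_heartbleed_vulnerable(version: str) -> bool:
    """Return True if an OpenSSL version string falls in the range
    affected by Heartbleed (CVE-2014-0160): 1.0.1 through 1.0.1f.
    1.0.1g and later are fixed; 0.9.x and 1.0.0 never had the
    vulnerable heartbeat code."""
    if not version.startswith("1.0.1"):
        return False
    suffix = version[len("1.0.1"):]
    # Bare "1.0.1" and letter suffixes a-f are vulnerable;
    # "g" and later letters are patched releases.
    return suffix == "" or suffix[0] <= "f"
```

For example, "1.0.1f" is classified as vulnerable, while "1.0.1g" and "0.9.8e" are not.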
More information on this security incident is available at http://heartbleed.com/
As some of you know, we have had a few outages recently that took us longer to respond to than they should have.
This is to announce that, to improve the monitoring of our servers and our response to outages, two changes have been made.
First, some background. Our previous monitoring tested the status of each server we operate by simply pinging it. As we have come to discover, a host machine can be completely locked up and crashed and yet still respond to ping requests.
Due to this problem with ping-based uptime monitoring, we have changed our monitoring to instead rely on establishing a TCP connection to a daemon running on each server. We believe this will alert us more accurately when things go down and prevent our external monitoring from reporting services as up when they are in fact down.
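To illustrate the difference, here is a minimal sketch of this style of check (the host, port, and function name are placeholders, not our actual monitoring configuration):

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a full TCP connection to the monitored daemon
    succeeds. Unlike an ICMP ping, which the kernel can answer even
    when the machine is otherwise wedged, this fails when the host is
    unreachable or the daemon's listening socket is gone. A stricter
    check could additionally read a response from the daemon."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The check could then be run periodically against each server, e.g. `tcp_check("server1.example.com", 8080)`, alerting whenever it returns False.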
The second change is that we are discontinuing our use of Pingdom and are now using BinaryCanary for external monitoring of all our servers. We have updated our status page to show uptime percentages for each server, with links to detailed per-server information from BinaryCanary.
We expect both of these changes to improve our uptime, speed our response to server outages, and reduce false positives and negatives in our external monitoring.
This is public notice that effective today, the primary business office of RAM Host has moved to the following location:
721 Ritchey St
Gainesville, TX, 76240-3532
Our previous primary business office location was as follows:
513 E. De Fee St
Baytown, TX, 77520-5118
Ownership has not changed, and none of our operations at any of our worldwide locations are affected by this.
In other words, RAM Host customers will experience no change in service as a result of this.
We are going to proceed with the IP renumbering we previously announced for legacy services in Kansas City.
We are going to be renumbering services in the following IP block:
This includes everyone with IPs 220.127.116.11 through 18.104.22.168.
Replacement IPs for this range will come from our 199.180.253.X block.
This IP range is going to cease functioning on February 28th, 2014.
All clients in this range have already been contacted and sent their new IP information. If you have services in this range and have not received an email with your new IP addresses, check your support tickets; the information will be in a ticket on your account entitled "IP Renumbering Notice".
As of today, RAM Host has been in business for 5 years.
A big thank you to all our customers - without you it wouldn't have been possible.