Wednesday, July 30, 2014

Vote for (or Suggest) Ideas

We just set up a feedback forum at http://qsandbox.uservoice.com

Please help us prioritize what to implement next.
The ideas with the most votes will get implemented first... unless somebody bribes us ;)

Please remember: there are no stupid ideas.
The more you suggest, the better.

Happy Sandboxing :)
qSandbox Team

Tuesday, July 29, 2014

Error Log Viewer & Shorter Table Prefix

We are happy to announce another set of site improvements, both internal and external.
  • Implemented an error log viewer
  • Shortened the db table prefix. It now uses the domain's ID instead of the sanitized site name. There were issues with WooCommerce db tables: they couldn't be created because their names were too long (see the first sketch after this list).
  • Added icons to the navigation
  • Fixed the incorrect disk space limit calculation. The allowed disk size was shown in GB instead of MB.
  • Also, the used disk space wasn't displayed correctly: 1.8 GB was shown as 1 G (a bug in formatFileSize; see the second sketch after this list).
  • Fixed errors that occurred when users used Join links
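
For the curious, here's a minimal sketch (in Python, for illustration only; the function names and the "qs" prefix are our invention, not the actual qSandbox code) of why the shorter prefix matters. MySQL caps table names at 64 characters, and WooCommerce ships tables with long names such as woocommerce_order_itemmeta, so a prefix built from a long site name can push the full table name over the limit:

    MYSQL_MAX_IDENTIFIER_LEN = 64  # MySQL's limit on table name length

    def old_prefix(site_name: str) -> str:
        # Old approach: sanitized site name; the length varies with the name.
        sanitized = "".join(c if c.isalnum() else "_" for c in site_name.lower())
        return sanitized + "_"

    def new_prefix(domain_id: int) -> str:
        # New approach: the domain's numeric ID keeps the prefix short and fixed.
        return "qs%d_" % domain_id

    def table_name_fits(prefix: str, table: str) -> bool:
        return len(prefix + table) <= MYSQL_MAX_IDENTIFIER_LEN

    wc_table = "woocommerce_order_itemmeta"  # one of WooCommerce's longer table names
    print(table_name_fits(old_prefix("my long woocommerce demo store for client testing"), wc_table))  # False
    print(table_name_fits(new_prefix(1234), wc_table))  # True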
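
And here's a sketch of the kind of bug that was in formatFileSize (again illustrative Python; the original function isn't published here). Truncating to a whole number turns 1.8 GB into "1", and dropping the "B" from the suffix turns "GB" into "G"; keeping one decimal place and the full unit fixes both:

    def format_file_size(size_bytes: int) -> str:
        # Walk up the units, dividing by 1024, and keep one decimal place
        # instead of truncating to an integer.
        units = ["B", "KB", "MB", "GB", "TB"]
        size = float(size_bytes)
        for unit in units:
            if size < 1024 or unit == units[-1]:
                return "%.1f %s" % (size, unit)
            size /= 1024

    print(format_file_size(1932735283))  # "1.8 GB", not "1 G"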

Thursday, July 24, 2014

Site Improvements

We've updated the message that advertises our services so it doesn't occupy as much space.


Also, premium users will see just "qSandbox" at the bottom of the screen.


qSandbox was down for some time

Hi all,

Our users notified us that the site was down. It's been back up for several hours, but we wanted to post about it here.
We apologize for the downtime, which may be related to the following:

---------- Forwarded message ----------
From: DigitalOcean <support@support****>
Date: Thu, Jul 24, 2014 at 4:08 AM
Subject: DigitalOcean NYC2 SLA Credit and Outage Explanation
To: *********

Hi, I would like to take a moment to apologize for the problems you may have experienced accessing your droplets in the NYC2 region July 21st, starting around 6PM Eastern time. Providing a stable infrastructure for all customers is our number one priority, and whenever we fall short we work to understand the problem and take steps to reduce the chance of it happening again.
In this case, we've determined that a few related events contributed to the outage:
First, we had a problematic optical module in one of our switches that was sending malformed packets to one of the core switches in our network. Under normal circumstances, losing connectivity to a single core switch should not be problematic since each cabinet in our datacenter is connected to multiple upstream switches. In this case, however, the invalid data caused problems with the upstream core switch.
When the core switch received the invalid packet, it triggered a bug in the software on the core switch which caused some internal processes that are related to learning new network addresses to crash. Some of the downstream switches interpreted this condition in a way that caused them to stop forwarding traffic until the link to the affected core switch was manually disabled.
Once traffic forwarding was restored to the core switches, they were flooded with a large volume of MAC address information. Our network is built to be able to handle a complete failure of half of its core switches; however, the volume of address updates, as a number of cabinets simultaneously cycled between up and down, triggered built-in denial-of-service protection features. This protection caused the core switches to be unable to correctly learn new address information, ultimately leading to connectivity problems to some servers.
Our network vendor has been engaged, and we've been working together to attempt to fully understand the scope of the problem and steps that we can take to address it. Concretely, we've begun evaluating some software updates that we believe may improve the situation. If we determine, as we hope, that these changes will improve stability in this type of situation, we will build a plan to upgrade our core network to this version as soon as possible. In addition, we continue to look for additional configuration changes that we can make in the meantime to help prevent this type of problem.
DigitalOcean's top priority is to ensure your droplets are running 24 hours a day, 7 days a week, 365 days a year.  We've taken the first steps to fully understand this outage and have begun making changes to greatly reduce the likelihood of a similar event in the future. This work is ongoing and we will continue to make changes and validate our infrastructure to ensure that it behaves as expected in adverse conditions.
We will issue an SLA credit for the downtime you have experienced. We realize this doesn't make up for the interruption but we want to uphold our promise to our users when we fall short.
Thank you for your patience throughout this process. We look forward to continuing to provide you with the highest possible level of service.
Mark Imbriaco
VP, Technical Operations
DigitalOcean

Sunday, July 20, 2014

Numerous Improvements: Automatic Upgrade

We are happy to announce a large batch of fixes and improvements to the qSandbox site.

The most important one is that upgrades now happen automatically; there's no need to wait for us to upgrade your account.

We've tweaked the Dashboard page so it's more compact.

Happy Sandboxing :)
qSandbox Team

qSandbox blog is up and running!

We are happy to announce the qSandbox blog.
It was about time to launch it.