EC2 instance using Amazon SQS queues



Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable, hosted queue for storing messages. Amazon SQS can be used to move work between components of an application that perform different tasks, without losing messages. Amazon SQS enables users to build an automated workflow.

Amazon Elastic Compute Cloud (EC2) is a web service that provides resizable compute capacity in the cloud. Amazon EC2 can be used for building applications that start small but can scale up rapidly as demand increases (Auto Scaling).

Amazon EC2 Features:

a) Increase or decrease capacity within minutes.
b) Launch one, hundreds, or even thousands of server instances simultaneously.
c) Web service API to control the scaling of instances depending on needs.
d) Pay only for what you use (pay-per-use pricing model).


Features of Amazon SQS:

a) A single Amazon SQS queue can be shared by multiple servers simultaneously.
b) A server that is processing a message can prevent other servers from processing the same message at the same time by temporarily “locking” the message. The server can specify how long the message stays locked. When the server is done processing the message, it should delete it. If the server fails while processing the message, another server can receive the message after the lockout period expires.


Pipeline processing with Amazon SQS:


a) Flexibility: a large monolithic server can be divided into multiple smaller servers without impacting the current system.

b) Piecemeal upgrades: individual sub-components can be taken offline or upgraded without bringing the entire system down.

c) Tolerance to failures: Amazon SQS isolates sub-components from each other, so the failure of one component does not impact the rest.


<?php

require_once('sqs.client.php');

define('AWS_ACCESS_KEY_ID', '<access key>');
define('AWS_SECRET_ACCESS_KEY', '<secret key>');
define('SQS_ENDPOINT', 'http://queue.amazonaws.com');
define('SQS_TEST_QUEUE', 'SQS-Queue-SVNLabs');
define('SQS_TEST_MESSAGE', 'Welcome to SQS.');

try
{
   $q = new SQSClient(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, SQS_ENDPOINT);

   // create queue
   $result = $q->CreateQueue(SQS_TEST_QUEUE);

   // list queues
   $result = $q->ListQueues();

   // send message to queue
   $messageId = $q->SendMessage(urlencode(SQS_TEST_MESSAGE));

   // receive message from queue
   $messages = $q->ReceiveMessage();
}
catch(Exception $e)
{
    echo 'Exception occurred: ', $e->getMessage(), "\n<br />\n";
}

?>
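As described above, a consumer should delete a message only after it has finished processing it; otherwise the message becomes visible to other servers again once the lock (visibility timeout) expires. A minimal sketch of that receive-process-delete loop is below. It assumes the sqs.client.php wrapper also exposes a DeleteMessage() method and that ReceiveMessage() returns message objects with Body and ReceiptHandle fields; those names are not shown in the snippet above, so adjust them to whatever your client actually provides.

<?php

require_once('sqs.client.php');

$q = new SQSClient(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, SQS_ENDPOINT);

// receive a message; it stays "locked" (invisible to other servers)
// for the duration of the queue's visibility timeout
$messages = $q->ReceiveMessage();

foreach ($messages as $message)
{
    // process the message body (urldecode because it was urlencoded on send)
    $body = urldecode($message->Body);            // assumed field name
    echo 'Processing: ', $body, "\n<br />\n";

    // delete only after successful processing; if the server fails before
    // this line, another server can pick the message up after the lockout
    $q->DeleteMessage($message->ReceiptHandle);   // assumed method / field
}

?>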

SaaS built using a PaaS (Google App Engine) and an IaaS (Amazon EC2)



SaaS = PaaS + IaaS


Tools for Development, Testing and Implementation:
* Amazon Web Services (AWS)
* Google App Engine (GAE)
* Google Chart Libraries
* Eclipse IDE


References:
Google AppEngine: http://code.google.com/appengine/
Amazon EC2: http://aws.amazon.com/ec2/
Google Chart: http://code.google.com/apis/chart/

“A lamp does not speak. It introduces itself through its light. Achievers never expose themselves. But their achievements expose them..!!!”

JSP S3Upload



JavaScript is a good alternative for signing S3 upload policies right in the browser, bypassing any server-side upload handling 😉

<%@ include file="config.jsp" %>
<%@page import="java.util.Calendar"%>
<%@page import="java.util.Date"%>
<%@page contentType="text/html" pageEncoding="UTF-8"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"><head>
<title>S3 Upload - JSP Demo</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">

<script src="sha1.js"></script>
<script src="webtoolkit.base64.js"></script>
<script src="script.js"></script>

<script>

function uploadS3()
{
var awsid = '<%=awsAccessKey %>';
var awskey = '<%=awsSecretKey %>';

var fileField = document.getElementById("file").value;

var policyText = '{"expiration": "2015-01-01T12:00:00.000Z","conditions": [{"bucket": "<%=bucket %>" },{"acl": "<%=acl %>" },["eq", "$key", "'+fileField+'"],["starts-with", "$Content-Type", "text/"]]}'; 

var policyBase64 = Base64.encode(policyText);

var signature = b64_hmac_sha1(awskey, policyBase64);

document.getElementById("policy").value = policyBase64;
document.getElementById("signature").value = signature;
document.getElementById("key").value = fileField;

//document.getElementById("postform").submit();

document.getElementById("result").innerHTML = '<a href="http://s3.amazonaws.com/<%=bucket %>/'+fileField+'">http://s3.amazonaws.com/<%=bucket %>/'+fileField+'</a>'; 

}

</script>

</head><body>

<strong>Uploading to Amazon S3</strong>

<div class="main">

<p>

<form id="postform" action="http://s3.amazonaws.com/<%=bucket %>" method="post" onsubmit="return uploadS3();" enctype="multipart/form-data">
<input type="hidden" name="key" id="key" value="" />
<input type="hidden" name="acl" id="acl" value="<%=acl %>" />
<input type="hidden" name="content-type" id="content-type" value="text/plain" />
<input type="hidden" name="AWSAccessKeyId" id="AWSAccessKeyId" value="<%=awsAccessKey %>" />
<input type="hidden" name="policy" id="policy" value="" />
<input type="hidden" name="signature" id="signature" value="" />
<input name="file" id="file" type="file" />
<input name="submit" value="Upload" type="submit" />
</form>

<div id="result"></div>

</p>
</div>

</body></html>
PHP based S3 Upload Tool: http://svnlabs.com/demo/s3/

You are great if you can find your faults, Greater if you can correct them, But greatest if you accept others with their faults.

How to Setup Elastic Load Balancing on AWS



How to create subaccounts and share buckets using IAM and CloudBerry S3 Explorer


Note: this post applies to CloudBerry Explorer 2.4.2 and later.
As always we are trying to stay on top of the new functionality offered by Amazon S3 to offer the most compelling Amazon S3 and CloudFront client on the Windows platform.
A few weeks ago Amazon introduced the Identity and Access Management (IAM) service. It is an exciting new service that allows you to create user accounts inside the master account and grant those accounts a set of permissions. CloudBerry Explorer PRO 2.4 already comes with full support for the IAM service and you can learn more about that in our previous blog post.
In this blog post we will look into the very common scenario of creating a subaccount within the master account and granting it permissions to a certain bucket. This might be useful if, for instance, you work with freelancers and want them to be able to work with the content in their own bucket.

Creating a policy

Click Access Manager in the main menu to run IAM management tool from within CloudBerry Explorer.

In the Access Manager click New User to open up a dialog. Name the user and click ok.
The new user should show up on the list. Right click it and click Add Policy…
Click New Statement and then <select actions> to choose the list of actions your new user will be allowed to perform. The most common ones are shown below.
Click in: to specify the bucket name and the path. Make sure to add “/*” to the path so the policy propagates to the bucket contents.
Click New Statement once again, this time for the bucket itself. Choose s3:ListBucket as the action and make sure you don’t add “/*” at the end, because this statement applies to the bucket itself, not to its contents.
You can optionally set a condition. In our example the policy is valid only until Nov 1, 2010; after that date the user will no longer have access to the resource.
Click OK to create the policy.
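Put together, the policy those two statements describe comes out as an IAM policy document roughly like the JSON below. The bucket name freelancer-bucket and the exact action list are placeholders for illustration; CloudBerry Explorer generates the real document for you.

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::freelancer-bucket/*",
      "Condition": {"DateLessThan": {"aws:CurrentTime": "2010-11-01T00:00:00Z"}}
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::freelancer-bucket",
      "Condition": {"DateLessThan": {"aws:CurrentTime": "2010-11-01T00:00:00Z"}}
    }
  ]
}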
Last but not least, you have to generate an access/secret key pair for your new user. Click Generate Access Keys… in the user context menu. Copy the keys so that you can hand them over to the user later.

Working as a User

Register an account for the newly created user in the CloudBerry Explorer console. Use the access/secret key pair created earlier.
Note: you can use the CloudBerry Explorer freeware to register one bucket for an IAM user. If you need to register more than one bucket you will have to turn to our PRO version.
Now, select the newly created account in the drop-down list. If you look at the list of buckets it will be empty; this is because we have not granted the user the right to list all buckets. You have to add the bucket as an external bucket manually: click the green button on the toolbar and type the bucket name.
Now you can see the bucket in the console. You can copy, move, delete files, create folders, etc.
As always we would be happy to hear your feedback and you are welcome to post a comment.

CloudBerry S3 Explorer is a Windows freeware product that helps manage Amazon S3 storage and CloudFront. You can download it at http://cloudberrylab.com/

CloudBerry S3 Explorer PRO is a Windows program that helps manage Amazon S3 storage and CloudFront. You can download it at http://pro.cloudberrylab.com/ It is priced at $39.99.

Like our products? Please help us spread the word about them. Learn here how to do it.
Want to get CloudBerry Explorer PRO for FREE? Make a blog post about us!

Backup mysql database to amazon S3



Below is a simple script that creates an SQL dump of a database on an Amazon EC2 server using “mysqldump”…

then uploads this SQL dump to an Amazon S3 bucket using the command line S3 tool “s3cmd”…


<?php

// build the dump file name once so mysqldump and s3cmd refer to the same file
$file = "/var/www/html/backup/".date("Y-m-d-H-i-s")."_db.sql";

$sqlbackup = "/usr/bin/mysqldump -v -u root -h localhost -r ".$file." -pdbusername  databasename 2>&1";

exec($sqlbackup, $o);

echo implode("<br /> ", $o);

$bucket = "s3bucketname";

exec("/usr/bin/s3cmd put --acl-public --guess-mime-type --config=/var/www/html/.s3cfg ".$file." s3://".$bucket." 2>&1", $o);

echo implode("<br /> ", $o);

?>

0 */12 * * * env php -q /var/www/html/s3bkup/s3bkup.php > /dev/null 2>&1 (every 12 hours)

You can set the above PHP file up as a scheduled task (cron job) for automated backups to your Amazon S3 bucket. I mostly use CloudBerry Explorer for Amazon S3 PRO for managing Amazon S3 files 😉

Amazon EBS



Amazon Elastic Block Storage (EBS)

We can use Amazon EBS much like the CD/DVD/pen drives on our PCs, laptops and servers, for backup or data transfer…
An EBS volume can be attached to an EC2 instance and used to save work files: attach it, mount it in the instance, and after the backup unmount and detach it.
We can use the volume afterward by mounting it in another instance, but different instances cannot use the same EBS volume at the same time.

Starting an Instance

# ec2-describe-images -o self -o amazon | grep machine
# ec2-add-keypair gsg-keypair (save this keypair for connecting to the instance via SSH)
# chmod 600 id_rsa-gsg-keypair ; ls -l id_rsa-gsg-keypair
# ec2-run-instances ami-235fba4a -k gsg-keypair
# ec2-describe-instances i-ae0bf0c7

Authorize ports to connect remotely…
# ec2-authorize default -p 22
# ec2-authorize default -p 80

Connect to instance
# ssh -i id_rsa-gsg-keypair root@ec2-67-202-51-223.compute-1.amazonaws.com

Create the Volume
# ec2-create-volume --size 1 -z us-east-1c
Create this volume in the same availability zone as the instance.

# ec2-describe-volumes vol-4771e479

Attaching the Volume
# ec2-attach-volume vol-4771e479 -i i-ae0bf0c7 -d /dev/sdh

Formatting the Volume
# ssh -i id_rsa-gsg-keypair root@ec2-67-202-51-223.compute-1.amazonaws.com
# ls /dev
# yes | mkfs -t ext3 /dev/sdh

Mounting the Volume
# mkdir /mnt/svnlabs-data
# mount /dev/sdh /mnt/svnlabs-data

Put a file on the volume
# vi /mnt/svnlabs-data/svnlabs.txt (put content here)

Unmounting the Volume
# cd ~
# umount /mnt/svnlabs-data

Detach the Volume
# ec2-detach-volume vol-4771e479 -i i-ae0bf0c7 -d /dev/sdh

When we attach this volume to another instance, our svnlabs-data content will be available on the new instance…

svnlabs will post some new articles on “Amazon Web Services” 😉 subscribe to svnlabs feeds

Configure Amazon EC2



1. Boot 2 Linux servers on EC2.
2. Assign an elastic IP to each of them.
3. Register a domain (e.g. svnlabs.com).
4. In the domain settings, create 2 host records, ns1.svnlabs.com and ns2.svnlabs.com, and point each record to one of the elastic IPs.
5. On your 2 nameserver instances, create DNS zones for ns1 and ns2 respectively (port 53 must be open; see the note after this list).
6. Make ns2 a slave of ns1. You can, if you wish, add ns3, ns4, etc., but it is not necessary unless your site is getting millions of users.
7. Boot another EC2 instance and install the Scalr application.
8. Create a user on ns1 called “named” that has permissions to update the DNS zone records on ns1.
9. The DNS settings of the Scalr application will refer to the nameserver ns1 with user “named” and the password as set on ns1.
10. Your application will, for example, have the domain svnlabs.com. Register this domain and set its nameservers to the ns1 and ns2 created previously.
11. You first need to create a new zone file on ns1 for svnlabs.com.
12. In Scalr, when asked for the application domain name, simply enter svnlabs.com and Scalr will handle the rest.
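One thing the steps above assume is that DNS queries can actually reach the nameserver instances, so port 53 must be open in their security group. Using the same EC2 API tools shown elsewhere on this blog (the group name “default” is just an example), that would look something like:

# ec2-authorize default -P udp -p 53
# ec2-authorize default -P tcp -p 53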

****************************************************************

You will need to register a domain name with a domain registrar.  After registering, you will need to enter your NS records for the domain name.  The NS records should point to a Domain Name Server (DNS).  Most registrars require at least two DNS servers to eliminate a single point of failure.

Some registrars provide free DNS services.  If you choose such a registrar, you would need to add a CNAME record for your sub-domain and ask any DNS application support related questions to your registrar.

Alternatively, you can launch and configure your own DNS servers on Amazon EC2.  A popular choice for Linux based DNS servers is BIND: http://en.wikipedia.org/wiki/BIND
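If you go the BIND route, the zone file you create for svnlabs.com (step 11 above) would contain records along these lines. This is only an illustrative sketch; the IP addresses and serial number are placeholders, so substitute your own elastic IPs.

$TTL 86400
@    IN  SOA  ns1.svnlabs.com. hostmaster.svnlabs.com. (
         2010100101 ; serial
         3600       ; refresh
         900        ; retry
         604800     ; expire
         86400 )    ; negative caching TTL
     IN  NS   ns1.svnlabs.com.
     IN  NS   ns2.svnlabs.com.
ns1  IN  A    203.0.113.10     ; elastic IP of the first nameserver
ns2  IN  A    203.0.113.11     ; elastic IP of the second nameserver
@    IN  A    203.0.113.20     ; elastic IP of the instance serving svnlabs.com
www  IN  CNAME svnlabs.com.    ; sub-domain record, as mentioned above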

Another option is to outsource your DNS servers using a third-party provider, for example http://www.dyndns.com/.

http://groups.google.com/group/scalr-discuss/web/how-to-host-your-mx-on-google

Cost: http://bhopu.com/Tags/Amazon-EC2