Wednesday, December 18, 2019

Amazon Queues with Ballerina

Ballerina and its Connectors

Ballerina is a general-purpose programming language designed for system integration. Web services and REST APIs can be integrated directly from Ballerina code. WSO2 Ballerina Integrator supports many out-of-the-box components, called Ballerina Connectors, to programmatically integrate external services and APIs. For example, when you want to connect to a Salesforce API, you can import the Salesforce Connector module into your Ballerina code and call its methods, which in turn invoke the relevant REST API provided by Salesforce.

Amazon Simple Queue Service (SQS)

Amazon SQS is a simple message queue API provided by Amazon. WSO2 Enterprise Integrator (WSO2 EI) provides the Ballerina Amazon SQS Connector to programmatically interact with the REST API provided by Amazon SQS. Once you have created an account with Amazon SQS, you can obtain the credentials for the SQS service, which can be given to the Ballerina Connector in a configuration file or as configuration parameters. Then you can perform queue creation, enqueue, dequeue and message-delete operations by calling the respective API methods provided by the Connector.

Using SQS Connector

In this blog I am going to discuss the simplest way to run Ballerina code with the Amazon SQS Connector from an Ubuntu/Linux console to create an SQS queue and send a message to it. For more information on using Ballerina Integrator and the SQS Connector with the VS Code plugin, please visit the relevant tutorial documentation.

Setting up Ballerina Environment


Go to the WSO2 Ballerina Integrator download page.
Download WSO2 Ballerina Integrator using the Download button and install it.
Check whether Ballerina is correctly installed on your machine by executing the following command.

$ ballerina -v

If Ballerina Integrator is correctly installed on your machine, you will get the following output.

Ballerina 1.0.2
Language specification 2019R3

Start developing Ballerina Code


Go to the directory where you want to create the Ballerina code.

$ cd loc

Create a Ballerina file.

touch sqs.bal

Open the file with your preferred editor.

gedit sqs.bal &

Add the following content to start coding.

import ballerina/log;
import wso2/amazonsqs;

public function main(string... args) {

}

Note how the Ballerina logging module and the Amazon SQS Connector module are imported into the code. The main function is the program entry point, as in many other languages. However, this code snippet will not build yet, as the imports are not used anywhere in the code.

Defining SQS Configurations and Client


Now let's define the Amazon SQS Configuration object and the SQS Connector client object above the main function.

amazonsqs:Configuration configuration = {
    accessKey: "Access Key",
    secretKey: "Secret Access Key",
    region: "Region",
    accountNumber: ""
};

amazonsqs:Client sqsClient = new(configuration);

Replace the Access Key, Secret Access Key and Region parameters with the credentials obtained when you created your Amazon SQS account. The accountNumber parameter is your AWS account number; if you create an SQS queue manually, you can also read it from the path of the generated queue URL.

Create a Standard Queue in Amazon SQS


There are two types of queues in Amazon SQS: Standard and FIFO. In this example we are going to create a Standard Queue. To do that, we invoke the sqsClient by adding the following code to the main function.

string|error queueURL = sqsClient->createQueue("myNewQueue", {});

if (queueURL is string) {
    log:printInfo("Created queue URL: " + queueURL);
} else {
    log:printInfo("Error occurred while creating a queue");
}

If the queue creation process encounters an error, queueURL holds an error object; otherwise it holds a string. The string is the queue URL, as specified in the documentation.

If the queue was created, the printed queue URL would look something like https://sqs.us-east-2.amazonaws.com/613964236299/myNewQueue. Note the format of the URL.

https://sqs.<Region>.amazonaws.com/<Account_Number>/<Queue_Name>

Enqueue a Message to an SQS Queue


Once an SQS queue is created, a message can be stored in the queue. Invoking sendMessage on the connector sends a message to the queue. Note that the queue's context path, /<Account_Number>/<Queue_Name>, has to be used to access the queue.


amazonsqs:OutboundMessage|error response = sqsClient->sendMessage("Sample text message.", "/613964236299/myNewQueue", {});

if (response is amazonsqs:OutboundMessage) {
    log:printInfo("Sent message to SQS. MessageID: " + response.messageId);
} else {
    log:printInfo("Error occurred while sending the message");
}

If the message was sent successfully, you will get console output similar to the following.

Sent message to SQS. MessageID: 7e7511a4-68f6-4c94-98e7-2b1e30301a0b

The complete code would look like the following.

import ballerina/log;
import wso2/amazonsqs;

amazonsqs:Configuration configuration = {
    accessKey: "AKZAY3QCLPL7DE5YSNC3",
    secretKey: "r0RYhP0lputX6hiYvcB5VK7hiY+Id+rUI57b7Qjp",
    region: "us-east-2",
    accountNumber: "613964236299"
};

amazonsqs:Client sqsClient = new(configuration);

public function main(string... args) {
    string|error queueURL = sqsClient->createQueue("myNewQueue", {});

    if (queueURL is string) {
        log:printInfo("Created queue URL: " + queueURL);

        amazonsqs:OutboundMessage|error response = sqsClient->sendMessage("Sample text message.", "/613964236299/myNewQueue", {});

        if (response is amazonsqs:OutboundMessage) {
            log:printInfo("Sent message to SQS. MessageID: " + response.messageId);
        } else {
            log:printInfo("Error occurred while sending the message");
        }
    } else {
        log:printInfo("Error occurred while creating a queue");
    }
}
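
The connector also provides operations to receive and delete messages, which complete the set of queue operations mentioned at the beginning of this post. The following is a rough sketch of how that could look inside the same main function, after the message has been sent; the receiveMessage and deleteMessage method names, their parameters and the InboundMessage fields follow my reading of the connector documentation and may need small adjustments for the connector version you use.

// Sketch only: method names and record fields are assumptions based on the connector docs.
amazonsqs:InboundMessage[]|error received = sqsClient->receiveMessage("/613964236299/myNewQueue", {});

if (received is amazonsqs:InboundMessage[] && received.length() > 0) {
    log:printInfo("Received message body: " + received[0].body);

    // The receipt handle identifies this received message instance for deletion.
    boolean|error deleted = sqsClient->deleteMessage("/613964236299/myNewQueue", received[0].receiptHandle);
    if (deleted is boolean && deleted) {
        log:printInfo("Deleted the message from the queue");
    } else {
        log:printInfo("Error occurred while deleting the message");
    }
} else {
    log:printInfo("No message received or an error occurred while receiving");
}

In a real application you would also derive the queue resource path from the createQueue response instead of hardcoding the account number and queue name.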



Friday, August 23, 2019

From ESB to Ballerina

Introduction to Conventional ESB

The main selling point of the ESB was the ability to configure process flow models for enterprise mediation scenarios without using a programming language. As the majority of integration scenarios (e.g. content-based routing, message transformation) can be modeled with a set of XML tags, it was expected that lay people could write their own business logic without any programming skill. This positioning is common to almost all ESBs currently on the market, such as Oracle ESB, IBM Integration Bus and Mule ESB.

What Was the Real Marriage with Conventional ESBs Like?


Though configuring mediation in XML was believed to be convenient, the content to be written was verbose and lengthy. Once a problem was detected in an XML configuration, it took a long time to fix, as the process involved reading documentation and some trial-and-error to come up with the correct configuration. As more and more configurable parameters were added to each XML component, readability and configurability dropped significantly as the ESB matured. The question then became why simple program code could not replace the lengthy XML configurations.
Another trend that came up over time was microservice architecture. As services are spread across several lightweight containers, conventional ESBs faced the challenge of deploying artifacts with a short startup time and a small memory footprint. With the maturity of ESBs, the code base had grown large and consisted of multiple architectural layers that added little value compared to the cost involved. It was difficult to fulfill the lightweight requirements of the microservice world with XML-based SOAP mediation, while REST and JSON had become mainstream.

Birth of Ballerina

WSO2 was thinking hard about how to address the above issues with existing ESBs. One proposed solution was to develop a Java API for integration scenarios, replacing the XML configurations. However, it was found that the flexibility given by an API would not be sufficient for some scenarios. In addition, the acquisition of Java's owner by a commercial organization raised some fears about its future as an open-source language, and an API would have tightly coupled the integration layer to the Java language. Another aspect was the unavoidable performance implication of Java's garbage collection, whereas modern integration is expected to be much faster than conventional data processing.
The decision was to develop a programming language focused on integration and supporting cloud-native capabilities by nature. Inspired by the programming-based paradigm used for integration by Apache Camel, Ballerina was born. The first version of the Ballerina language runs on the Java Virtual Machine (JVM) and is known as JBallerina. Ballerina code is compiled to Java bytecode, which runs on top of the JVM. This addressed most of the issues related to the lack of libraries for the Ballerina language, as Java library APIs can be wrapped with Ballerina, while still providing the development environment integration developers expect. However, Ballerina's future is not limited to the JVM. Later there will be an implementation called nBallerina, which is intended to run on different libraries and runtimes, supporting heterogeneous language libraries.

My Programming Experience with Ballerina


At the moment I am involved in developing a Ballerina-based integrator for WSO2 Enterprise Integrator (WSO2 EI). It will be the next generation of WSO2 Enterprise Integrator, facilitating users in developing integration scenarios in the Ballerina language. As the first step we are developing a set of Ballerina Connectors, which are analogous to WSO2 ESB Connectors; we develop these connectors with Ballerina language 1.0.0 alpha. I have seen XML-based mediation used in similar cases, and development with Ballerina is much more intuitive in comparison. With Ballerina I could easily apply my programming knowledge of Java, JavaScript and Python as a transferable skill. It was far easier than I expected, apart from some issues I faced with the VS Code development environment. The VS Code plugin for Ballerina is under heavy development at the moment and is expected to solve most of its usability issues. Conventional integration platforms that use XML as the configuration language (e.g. Mulesoft, IBM Integration) are highly prone to errors around error handling and incompatibility across different mediators/connectors, as the issues are not interactively communicated to the developer. That greatly increases development time compared to writing general-purpose code. Ballerina has correctly addressed that deficiency by making error handling mandatory by design and by providing code completion suggestions and snippet generation for commonly used scenarios. As Ballerina supports defining messages as types, many runtime errors caused by mismatched message structures are caught at compile time.
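
To make the error-handling point concrete, here is a minimal, generic sketch, assuming nothing beyond the standard log module; the readGreeting function and its names are purely illustrative and not part of any WSO2 connector. Because the return type is the union string|error, the compiler forces the caller to deal with the error case before the value can be used.

import ballerina/log;

// A purely illustrative function that can fail. The string|error union
// makes the possibility of failure part of the signature.
function readGreeting(string name) returns string|error {
    if (name == "") {
        return error("Name must not be empty");
    }
    return "Hello, " + name;
}

public function main() {
    string|error greeting = readGreeting("");

    // The compiler will not let us treat `greeting` as a string until the
    // error case has been narrowed away, so handling it is unavoidable.
    if (greeting is string) {
        log:printInfo(greeting);
    } else {
        log:printInfo("Error: " + greeting.reason());
    }
}

The same pattern applies to connector calls, which typically return a union of a result type and error, so a forgotten error path shows up at compile time rather than at runtime.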

Final Wrap up

With my previous experience developing on XML-configuration-based ESBs, I find it easier to develop similar scenarios with Ballerina. I have tried my best to convey my opinion on Ballerina while minimizing my biases. I expect that the configuration-based enterprise integration paradigm will move toward simple programming-based models like Ballerina in the future.



Sunday, November 12, 2017

Stop Machine Learning and Start Machine Studying

Machine Learning at Present

In the field of AI (Artificial Intelligence), machine learning is the key concept used for achieving intelligent systems. In machine learning, a learning model is developed and trained on a sample data set to create the intelligence. The performance of the system depends on the elegance of the design and the amount and relevance of the data. The model is designed to capture existing knowledge specialized to the domain, which facilitates fast learning and improved accuracy. Specialized knowledge is required to design good learning models, and today the focus is almost completely on designing better models to come up with better AI systems. So there are more and more papers written on new models in the expectation of better AI performance. Unfortunately, there are still few significant machine learning systems designed by small startups, as opposed to data-rich technology giants like Google or Microsoft.

Limitation of Data Driven Learning

The real advantage in machine learning for technology giants is the amount of available data. Google has almost all the information the general public knows in its data centers. Facebook has more data about us than we know about ourselves (because we forget). An ordinary organization, or even a university, cannot afford that much data at all. On the other hand, it takes a lot of computational resources (CPU power and time) to train a better machine learning system, for the following reasons.

  1. The scale of data used for training is large
  2. The learning rate in most machine learning systems slows down as learning progresses

It seems an ordinary technology company cannot match the AI capabilities of a giant company with such a high volume of resources. But if we go back to humans, whose intelligence we are imitating in our mechanical systems, we see something different. We learn a lot on our own, even without such a high volume of data or such high energy consumption.

How do Humans Learn?

When it comes to humans, we have a learning system of biological neural networks similar to Artificial Neural Networks (ANNs), but with a difference. We first learn which information sources are reliable (the first source is the mother, then the father, relatives, teachers, friends, books, the Internet, etc.) and then get wisdom directly from those sources. That is also a recursive process. We first identify whom we can trust and believe in them. Then we change our beliefs according to their inputs, provided the new beliefs do not largely contradict our existing belief system. In that process we gather other reliable sources and get wisdom directly from them as well, in the same way. For example, we start by believing our mother, then come to believe that our father is also reliable and start to believe what he says. Another example is that we believe our school teachers, read the books they recommend, and believe what those books say about reality. We evaluate the validity of a piece of knowledge, by evaluating the knowledge itself or its source, only when that knowledge contradicts our existing knowledge. For example, a child living in a socialist society who reads about the benefits of capitalism will try to evaluate the reliability of the new knowledge of capitalism against the existing knowledge of socialism.

In this way humans, through a long-term process of learning and studying, gather the wisdom gathered by other people, simply by believing in the information source. In reality we learn very little purely by ourselves compared to the amount we learn by studying other information sources. That makes it possible for us to know about very risky and time-consuming experiences like death and aging.

How Machines Can Learn?

Similar to the way we learn by first identifying reliable information sources, machines can be modeled to identify reliable information sources through conventional machine learning. Then the machine itself can retrieve information from the source and start to change its behavior according to that information. That is a process of converting the information obtained from the reliable source into meta-information of the learning model. This process can be executed recursively, and the system can acquire a lot of knowledge with very little learning. That is pure studying. But how can machines study like humans?

Read Like Humans

The main body of humankind's knowledge is already stored in the form of natural language, in books and in online content. Machines can first study what humans have learned throughout history by reading text content in natural languages.


Role of NLP

But the problem is that machines are not natively capable of reading human languages to learn from books. That is where Natural Language Processing (NLP) comes into play. Machines can use existing NLP models to extract information from text as logical information in the system. The remaining work is to convert the gathered logical information into the meta-information of the learning model and to run the system within a controlled scope of logical learning and decision making. Existing models for evaluating the source credibility of information can be reused to identify reliable knowledge sources, and natural language translation technologies can further enhance the scope of knowledge available to learn from throughout the world.

Wednesday, November 8, 2017

Is AI Evil?

There is a heavy debate among technology giants about whether AI can become a threat to the existence of mankind once it becomes an Artificial Super Intelligence (ASI). But the problem with this prediction is that we do not know how the logic of a super intelligence would reason about facts. Even humans cannot understand how we reason in most cases. So when it comes to a super intelligence that is a million or more times more intelligent than humans, how can we predict what its decisions would be?

The problem is how a rational thinker would decide whether humans should exist in this world or not. One argument is that humans are like a virus to the natural world (as the agent says to Morpheus in the movie The Matrix) and should be eliminated. Then the question is whether human activity could instead be considered a natural process and tolerated. Another argument is that the ultimate wisdom is to think with the heart and be friendly to humans. But then the question is why kindness should be reserved for humans and not for the other living beings in the world. Humans are notorious for killing and suppressing the other living beings on earth.

None of the above arguments can rationalize the value of the existence of human beings, nor can they rationalize the elimination of humans from the earth. Then how can we find out whether AI could be evil or not?

First, we can assume AI is a technology that mimics the natural human way of thinking. Let's check whether that evil nature can be expected in humans. In real humans, evilness is clearly visible, but it is not able to become a threat to our existence. One reason is that the scope of an individual human's power is limited, so he cannot single-handedly use a weapon of mass destruction or a similar method to kill other people; others would stop him if that type of behavior were seen. An ASI, however, is so intelligent that it could manipulate humans easily, as it can think many steps ahead of human thinking. Still, there is only a very small probability of a human being developing an intention to kill other humans. But why?

Human thinking and our value system are programmed, through evolution, to preserve our own genes. Even your wish and mine to protect human beings is a result of that bias. That is not the only bias humans have. Humans and other animals share many common biases, such as the desire for food and sex and the fear of destruction. All of them constitute the basic drives of a human or an animal. We want to survive, protect our species and work for the well-being of human society. Our thinking is driven by these goals. If AI is developed so that it has the same goals as us, it will process information for the well-being of humans. That is the simple answer.

As most modern AIs are based on Artificial Neural Networks (ANNs), if they are developed with a neuronal architecture similar to that of real human beings, one that embeds the evolutionary goals of human beings, AIs will start to have a sense of things similar to ours. But remember, according to us, we and our species are the ones that should survive. If that bias is embedded into ANNs, they will also start to feel the existence of a self and will start to protect their own species. So before we mimic our neuronal architecture in ANNs, we should identify the connections related to the self and replace them with humans, so that the ANN does not have a self of its own but has humans in its place. If the self is not replaced correctly with humans, the ANN will undo the replacements we made and become a selfish personality that ultimately treats humans the way we treat cows and chickens.

Then the question is what happens if we do not mimic human neural networks. In that case there would be no such issue, depending on the goals given to the system. One of the goals should always be protecting the human species, human laws and human traditions. If the goal were something like building chairs (without any goals related to protecting humans), the system would use every possible way to achieve that target. It would start to kill humans to take their land, to plant trees, to get wood. Finally it would destroy all the humans in the world to make the greatest number of chairs. Now you see the challenge: all the actions of an AI depend on its basic goals.

That is similar to the attitudes of human beings. Parents and adults plant attitudes in a child's mind that are good for the continued existence of society. Actually, that is only part of it; a child's brain is automatically programmed, to a certain extent, to be aligned with these attitudes. Antisocial, criminal individuals are eliminated by society, which has evolved humans to maintain only the attitudes best suited for a society to exist. The same can be applied to ANNs. A set of ANNs can be placed in a virtual society with agents representing human beings. When their attitudes run against the well-being of humans, those ANNs should be eliminated. Running that process would select only the ANNs best suited to our human society. That is when they should be taken out of the virtual world and used in the real world. And employing several such ANNs would protect us if one of them turned against us.

Thursday, December 31, 2015

Bye to 2015

In less than two hours a new calendar year, 2016, will begin, and with it this blog will begin anew with a different type of content. Stay tuned for a new type of blogging culture. :)

Sunday, March 2, 2014

ESB Performance Round 7.5

WSO2 has carried out performance testing of its latest ESB release, WSO2 ESB 4.8.1. In WSO2 ESB Performance Round 7.5 it compares WSO2 ESB 4.8.1 with other competing ESB products: Mule ESB 3.4.0, Talend-SE 5.3.1 and UltraESB 2.0.0.

Basic observations are as follows.



Tuesday, July 16, 2013

How to Build WSO2 Code

Although WSO2 is open source, many people have had problems checking out the WSO2 source code and building WSO2 products. Here are the simple steps to do it.
Note that at the moment ongoing development happens in the trunk, with version 4.2.0-SNAPSHOT. The last released WSO2 code is located in the 4.1.0 branch, with Carbon platform version 4.1.2.

Build the trunk


  1. Check out Orbit from https://svn.wso2.org/repos/wso2/carbon/orbit/trunk/
  2. Check out Kernel from https://svn.wso2.org/repos/wso2/carbon/kernel/trunk/
  3. Check out Platform from https://svn.wso2.org/repos/wso2/carbon/platform/trunk/
  4. Install Apache Maven 3 on your computer.
  5. Go to the checked-out directories and build the orbit, kernel and platform code with Maven, in that order. (Use the command mvn clean install.)
  6. If any errors occur in the tests, use the command mvn clean install -Dmaven.test.skip=true
  7. If the build completes properly and you are fortunate, you will get each product as a zip file in its distribution module. For example, WSO2 BAM will be in the platform/trunk/products/bam/modules/distribution/target directory.
  8. Most probably you will not be able to build all the products successfully at the same time, and it will also take a long time. So you can build only the product you want, as follows: comment out the products module in platform/trunk/pom.xml. Then, after building all three of orbit, kernel and platform, you can manually build the product(s) you want. For example, if you only want to build WSO2 BAM, go to platform/trunk/products/bam and build with the command mvn clean install -Dmaven.test.skip=true

Build the Branch 4.1.0

  1. Check out orbit from https://svn.wso2.org/repos/wso2/carbon/orbit/branches/4.1.0/
  2. Check out kernel from https://svn.wso2.org/repos/wso2/carbon/kernel/branches/4.1.0/
  3. Check out platform from https://svn.wso2.org/repos/wso2/carbon/platform/branches/4.1.0/
  4. Follow exactly steps 4, 5 and 6 mentioned under "Build the trunk".
  5. In step 7, use the directory path branches/4.1.0/products/bam/2.3.0/modules/distribution/target as the BAM pack location.
  6. Then follow step 8 by commenting out the products module in branches/4.1.0/pom.xml and building BAM in branches/4.1.0/products/bam/2.3.0

Build a Tag

When there is already a released product, the best way to build it is by checking out the tag of the released version. The reason is that commits may have been made to the branch after the release by mistake.
For this example let's continue with WSO2 BAM 2.3.0. It has orbit version 4.1.0, kernel version 4.1.0 and platform version 4.1.2. You can check out these three from https://svn.wso2.org/repos/wso2/carbon/orbit/tags/4.1.0/ , https://svn.wso2.org/repos/wso2/carbon/kernel/tags/4.1.0/ and https://svn.wso2.org/repos/wso2/carbon/platform/tags/4.1.2/ . Then continue building in the same way as earlier.