Latest news

Machine Learning: autonomous learning

Machine Learning has been developing further every day, thanks to the digital transformation movement. It originated from the theory that computers could learn to perform specific tasks and to recognise patterns. The challenge was simple: to determine whether computers could learn from data.

Machine Learning provides systems with the ability to learn and improve from experience, without being explicitly programmed to that effect. The focus is on developing programs that use available data and can learn on their own. Mathematical models are built and fed with (potentially) large amounts of data. The algorithms learn to identify patterns and to extract insights that are then applied when new information is processed. The term dates back to 1959, when the pioneer Arthur Samuel defined Machine Learning as the ability of a computer to learn without being explicitly programmed to do so.

This learning process starts with data processing and trying to identify patterns. The main goal is to allow computers to learn autonomously, without the need for human assistance, using that knowledge to make decisions according to what was “learnt”. Even though machine learning algorithms have been around for a long time, the ability to apply these mathematical calculations to Big Data with increasing frequency is a recent development. However, according to industry reports, what is considered exponential growth in this area today will be seen as mere “baby steps” in 50 years. This AI field is expected to grow extremely fast in the coming years.
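As an illustration of that idea (learning a pattern from data rather than from explicit rules), here is a minimal nearest-neighbour classifier in plain Python; the data points and labels are invented for the example:

```python
import math

def nearest_neighbour(train, query):
    """Classify `query` with the label of the closest training point.

    `train` is a list of ((x, y), label) pairs: the "experience" the
    model learns from. No task-specific rules are programmed; the
    decision comes entirely from the data.
    """
    closest = min(train, key=lambda pair: math.dist(pair[0], query))
    return closest[1]

# Toy data forming two clusters the algorithm was never told about.
training_data = [((1, 1), "A"), ((1, 2), "A"),
                 ((8, 8), "B"), ((9, 8), "B")]

print(nearest_neighbour(training_data, (2, 1)))  # → A
print(nearest_neighbour(training_data, (8, 9)))  # → B
```

New points are labelled purely by proximity to past observations, which is the essence of “learning from experience” rather than from hand-written rules.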

Examples of Machine Learning

The continuing interest in this practice stems from a few key factors that have also made data mining and Bayesian analysis extremely popular: growth in the volume and variety of available data; cheaper and more powerful computational processes; and low cost storage.

A few examples of machine learning applications in companies include:

  • self-driving vehicles;
  • recommendations from online platforms such as Amazon and Netflix, based on users’ behaviour;
  • voice recognition systems such as Siri and Cortana;
  • PayPal’s platform, which relies on machine learning algorithms to fight fraud by analysing large quantities of customer data and assessing risk;
  • Uber’s model, which uses algorithms to determine arrival times and pickup locations;
  • spam-detection mechanisms in email accounts;
  • facial recognition on platforms such as Facebook.

Industries that are choosing Machine Learning

Most industries with large amounts of data have already acknowledged the potential of this technology. The possibility to extract insights allows companies to obtain a competitive advantage and work more efficiently.

Financial Services

Banks and other financial entities are using machine learning with two goals: extracting valuable insights from customer data and preventing fraud. Insights identify investment opportunities according to customers’ profiles, and, concerning fraud, the identification of high-risk customers and suspect transactions is improved.

Furthermore, this technology can also influence customer satisfaction. By analysing a user’s activity, smart machines can predict, for example, a possible account closure before it happens and prompt mitigating actions.

Health

Health entities can capitalise on the integration between IoT and data analysis to develop better solutions for patients. The emergence of wearables allows acquiring data related to the patients’ health, which, in turn, allows health professionals to detect relevant patterns including risk patterns. Therefore, this technology offers the potential for better diagnosis and treatment.

Retail

Nowadays, the impact of smart machines on the retail experience is quite obvious. The result is a highly personalised service that includes recommendations based on purchase history or online activity; improvements in customer service and delivery systems, where machines decipher the meaning of users’ emails and delivery notes in order to prioritise tasks and ensure customer satisfaction; and dynamic price management, which identifies patterns in price fluctuations and allows prices to be set according to demand. The ability to gather, analyse and use data to personalise, for example, a purchase experience (or implement a marketing campaign) is the future of retail.

Transportation

Analysing data to identify patterns and trends is key to the transportation industry, since more efficient routes and the anticipation of potential problems translate into higher profits. Data analysis and the modelling aspects of machine learning are important tools for delivery companies and public transportation, allowing them to improve their income.

Machine learning apps allow companies to automate the analysis and interpretation of business interactions, extracting valuable insights that make personalising products and services possible.

Xpand IT has a complete service portfolio in Machine Learning. If you want to know how to use Machine Learning in your business and obtain real added value, we can help. Do you want to know how we can help your business? Contact us here and get the best out of this technology!

Sílvia Raposo

Apache Superset Open Source BI: almost the alternative to Tableau

What is Apache Superset?

Superset is a modern BI app with a simple interface that is feature-rich when it comes to visualisations, allowing users to create and share dashboards.

This app is simple and doesn’t require programming, and allows the user to explore, filter and organise data. The best part is… it’s Open Source!

What does Apache Superset provide?

What is truly appealing about Apache Superset is the fact that you can explore each dashboard in depth. Superset allows you to focus on each graph/metric and easily filter and organise the data.

Another attractive feature in this app is the SQL/IDE editor with interactive querying.

Concerning security, Superset allows you to define a list of users and a list of default functionalities (associated with groups of users), and lets you view user statistics, giving you total control. You can establish baseline permissions, as well as grant access to certain views or menus. The app also provides an action log.

Visually, Superset has a minimalist and well-organised interface. Even though it is not as easy to use as Tableau, Superset can be an alternative for creating dashboards for people with some knowledge of SQL.

Database support

Superset supports most SQL databases through the Python ORM SQLAlchemy, which allows you to access MySQL, Postgres, Oracle, MS SQL Server, MariaDB, Sybase, Redshift and others (more information here).

Superset also works with Druid (for example, Airbnb uses Superset with Druid 0.8x), but it does not have all the advanced features available.

SQL-LAB

This feature is definitely a plus. SQL-Lab allows you to select a database, schema and table (previously uploaded) and do an interactive query, preview the data and also save the query history (as shown below).

SQL Lab

A semantic layer allows you to define fields and metrics (for example, ratios or anything expressed by SQL):

SQL Lab
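As a rough illustration of what such a SQL-defined metric looks like, the snippet below computes a margin ratio using Python's built-in sqlite3 module. The table and column names are hypothetical; the aggregate expression mirrors the kind of SQL you would register as a metric in the semantic layer:

```python
import sqlite3

# Hypothetical sales table standing in for a real data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL, cost REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("north", 100.0, 60.0), ("north", 50.0, 20.0),
                  ("south", 80.0, 70.0)])

# A ratio metric: margin = SUM(revenue - cost) / SUM(revenue).
# In Superset this expression would live in the semantic layer, and
# every chart built on the table could reuse it.
metric_sql = """
SELECT region,
       SUM(revenue - cost) * 1.0 / SUM(revenue) AS margin
FROM sales
GROUP BY region
"""
for region, margin in conn.execute(metric_sql):
    print(region, margin)
```

Defining the ratio once, as SQL, keeps every dashboard that uses it consistent.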

Query history:

Query history

Python modules are also available inside SQL (including some predefined macros) via Jinja templating.

The least positive aspect is that you cannot query or join multiple tables at the same time. The solution is to create a view, which works as a logical layer that abstracts the SQL query, thereby acting as a virtual table. The downside is that every query then runs on top of the view’s own query, which can result in performance issues.
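The workaround can be sketched with Python's built-in sqlite3 module (the table names are invented for the example): the view pre-joins the two tables, so a tool restricted to single-table queries can treat it as one virtual table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE orders (customer_id INTEGER, total REAL)")
conn.execute("INSERT INTO customers VALUES (1, 'Ada'), (2, 'Alan')")
conn.execute("INSERT INTO orders VALUES (1, 10.0), (1, 5.0), (2, 7.5)")

# The view is the "logical layer": the join happens inside it, and a
# single-table client (such as a Superset slice) queries it directly.
conn.execute("""
CREATE VIEW customer_orders AS
SELECT c.name, o.total
FROM customers c JOIN orders o ON o.customer_id = c.id
""")

rows = conn.execute(
    "SELECT name, SUM(total) FROM customer_orders GROUP BY name ORDER BY name"
).fetchall()
print(rows)  # → [('Ada', 15.0), ('Alan', 7.5)]
```

Note the performance caveat mentioned above: every query against the view also executes the view's underlying join.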

How to create a dashboard

To create a dashboard, Superset works as follows: there are sources, where you can find databases and tables; slices which are sheets with graphs; and, lastly, dashboards which are composed of groups of slices. Each slice is associated with one or more dashboards, and each dashboard has various associated slices.
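Those relationships can be sketched in a few lines of Python (the class and field names here are ours, for illustration only, not Superset's internal model):

```python
from dataclasses import dataclass, field

@dataclass
class Slice:
    """A chart built on a single data source (a "sheet with a graph")."""
    name: str
    table: str

@dataclass
class Dashboard:
    """A dashboard is composed of a group of slices."""
    name: str
    slices: list = field(default_factory=list)

revenue = Slice("Revenue by region", table="sales")
signups = Slice("Daily sign-ups", table="users")

exec_board = Dashboard("Executive overview", slices=[revenue, signups])
sales_board = Dashboard("Sales deep-dive", slices=[revenue])  # slice reused

# A slice can appear on several dashboards (many-to-many relationship):
print([d.name for d in (exec_board, sales_board) if revenue in d.slices])
```

The key point is the many-to-many link: a slice is built once and can be placed on any number of dashboards.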

Apache Superset dashboard

Views have different types of graphs available such as histograms, box plots, heatmaps or line charts.

Apache Superset dashboard

It is simple to edit graphs: the available features for each view are on the left-hand side, and you just have to change them and press “Run Query”.

Although flexible in most areas, Superset imposes some standardisation, as happens with the colour schemes.

Apache Superset dashboard

Each view allows you to filter the data using wildcards.

Apache Superset dashboard

Superset also allows you to share the view, export data to .json and .csv, and see the exact query performed behind each view.

Apache Superset dashboard

Security

Superset integrates with the main authentication backends (database, OpenID, LDAP, OAuth, REMOTE_USER, …).

Concerning privileges, as stated above, this app provides default roles such as Admin (full access), Alpha, Gamma, sql_lab and Public.

It is possible to establish permissions for each user, restricting access to a subset of data sources, menus, views, specific metrics and other criteria. Hence, it is relatively easy to define which type of permission and/or access to data is granted to each person.

People using Superset

According to GitHub, Superset is currently being used by Airbnb, Twitter, GfK Data Lab, Yahoo!, Udemy and others.

It is important to note that “Superset was tested in large environments with hundreds of users. The production environment of Airbnb runs with Kubernetes and more than 600 active users who see more than 100 thousand graphs per day”.

Superset Vs Tableau

Tableau:

  • Able to join tables within the same or different databases.
  • Detailed customisation of dashboards, with legends, filters, tags, etc.
  • Easy for beginners to learn and doesn’t require users to know SQL. Since the platform allows more complex and flexible tasks, there is a second learning curve for users who want to make the best use of Tableau.
  • Paid.

Superset:

  • Unable to query/join multiple tables. Only possible view by view, which means having multiple queries, thereby affecting performance.
  • Limited customisation by type of view (however, creation of CSS templates is available).
  • Easy and smooth learning, but requires SQL knowledge from users.
  • Free and Open Source.

Superset’s main advantages

Besides all the advantages already stated, one of the main features of Superset is… it’s Open Source Business Intelligence!

Other advantages:

  • Provides BI without needing code (easy to use for those who are not programmers: you only need to know basic SQL);
  • Easy and quick setup;
  • Provides “SQL-Lab” that allows interactive querying;
  • A semantic layer that broadens the dashboard with ratios and other metrics (based on SQL);
  • Easy and attractive interactive view, that allows data exploration;
  • Satisfies the needs of most companies to allow simple data analysis.

Superset’s disadvantages

  • The app still doesn’t support NoSQL databases;
  • Even though the number of users is growing, it still has little or no support;
  • Sometimes, SQL-Lab freezes in queries for large amounts of data;
  • Has a considerable number of other unresolved issues.

Susana Santos

Data Scientist, Xpand IT


Best practice for service request management

Service request management

What are service requests?

IT teams receive a wide variety of service requests from their clients, including requests for access to apps, software improvements, computer upgrades and new smartphones. These kinds of requests are known, according to ITIL, as “Service Requests”, and “Request Fulfilment” is the corresponding management process. A lot of service requests are recurring, and in order to obtain maximum efficiency, it is necessary to establish processes and procedures to be followed.

Request fulfilment – what is it?

Request fulfilment is the process followed by the Service Desk team and consists of fulfilling a request from a client. Its mission is to answer the request with the best quality support. In an organisation where a high number of service requests need to be managed, it is recommended that the management is done via a totally independent workflow for logging and handling requests.

What are the four main processes of IT?

  1. Service request management – a formal request from a user for something that needs to be provided.
  2. Incident management – a non-planned disruption of an IT service or a reduction of its quality – for example: “The website is down.”
  3. Problem management – eliminate recurring incidents and minimise incidents that can be prevented – for example: “The reporting app problem is happening again.”
  4. Change management – standardised method to control changes to the IT system in order to minimise its impact on services – for example: “The database upgrade is now complete.”

Prioritisation in service request management

For IT teams, service requests frequently exceed available capacity in terms of time and resources. IT service teams in big companies constantly answer business requests and, much of the time, must prioritise: responding first to the clients who need the most attention. However, clients complain that it is hard to work with IT because it does not respond, and that it takes too long to complete the requests they need for their work. A service request management system makes this process really simple, since it gives people a “self-service” capability, provides answers based on suggestions from a knowledge base and streamlines the whole request fulfilment process, thereby delivering an excellent service. In the light of all these issues, there are a few things that IT service teams should prioritise.

Top five prioritisations to deliver an excellent IT service:

  1. Client comes first – service desk teams can often be driven by supply instead of demand. Are you creating a service request catalogue with self-service features because you think it is good, or are you working directly with your clients, meeting their biggest needs? A lot of organisations have created a catalogue portal of service requests that has resulted in very low usability. Learn from others’ mistakes, and create something based on demand, not on supply.
  2. Focus on “popular” requests – service desk teams can start from a broad and superficial standpoint or from a narrow and deep one. Understand what will serve your organisation and your clients best. It is common practice to start with a sub-group of “popular” services and expand from there, based on usage and feedback. Try not to overload your team in the early stages, and remember that a failed launch will make it more difficult to have clients coming back for a second try.
  3. Integrate knowledge – clients seek answers. Therefore, give them easy access to the knowledge base and redirect tickets to searchable items. Providing a self-service experience that your associates love is the first step towards making the whole process easier and encouraging them to come back.
  4. Centralise the self-service portal – clients are always looking for a single place where they can get help, so even if you develop the most powerful self-service system, it will be useless if it’s not easily found by users. Always try to centralise and increase value when they use the services you offer.
  5. Streamline automation – providing highly functional and knowledge-centric service request management is an excellent first step, but you need to find ways to make your IT team deliver even more value to your clients through a self-service experience. Here, the power to be effective is found in automation. When you incorporate automation into your service desk functionality, you reduce the overall workload of your IT team by taking care of the most common and repetitive tasks.

Service request management process

Even though there are some variations in the way a service request is fulfilled, it is important to focus on how to leverage standardisation and improve the general quality and efficiency of the service. The schema below shows a simple service request fulfilment process, based on the recommendations from ITIL, that can be used as a starting point to adapt existing processes or establish new ones.

Service request management

An overview of the service request fulfilment process:

  1. A client requests support through their catalogue of services or via email.
  2. The service desk team analyses the request according to the qualification and approval process.
  3. A team member of the service desk works to fulfil the request, or forwards the request to someone who is able to complete it.
  4. When the request is fulfilled, the service desk completes the ticket. The team member consults the client to ensure that the request was fulfilled as expected.
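The four steps above can be sketched as a small state machine; the state names and transitions below are illustrative, not a formal ITIL model:

```python
from enum import Enum

class State(Enum):
    SUBMITTED = "submitted"      # 1. client raises the request
    IN_REVIEW = "in review"      # 2. qualification and approval
    IN_PROGRESS = "in progress"  # 3. being fulfilled (possibly forwarded)
    RESOLVED = "resolved"        # 4. ticket completed, client consulted

# Which states each state may move to; review may also close a request.
TRANSITIONS = {
    State.SUBMITTED: {State.IN_REVIEW},
    State.IN_REVIEW: {State.IN_PROGRESS, State.RESOLVED},
    State.IN_PROGRESS: {State.RESOLVED},
    State.RESOLVED: set(),
}

class ServiceRequest:
    def __init__(self, summary):
        self.summary = summary
        self.state = State.SUBMITTED

    def advance(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

req = ServiceRequest("Access to reporting app")
req.advance(State.IN_REVIEW)
req.advance(State.IN_PROGRESS)
req.advance(State.RESOLVED)
print(req.state.value)  # → resolved
```

Encoding the allowed transitions explicitly is what makes the workflow standardised: a request cannot skip qualification or be closed out of order.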

Eight tips to consider when defining service requirements

  1. Start with the items most frequently requested and choose those that can be fulfilled in the fastest and easiest way. This will allow you to deliver immediate value to your clients and will allow the service desk team to learn as they build further elements of the service requests catalogue.
  2. Record every dimension of service requests (date of request, approval process, fulfilment procedures, fulfilment team, “owners” of the process, SLAs, reporting, etc.) before you add them to the catalogue. This will allow the IT team to manage the requirements of the request better over time. This step is really important to the more complex requests that will evolve in the future.
  3. Collect all necessary data to start the request processes, but do not overload your client with too many questions.
  4. Standardise the approval process as much as you can. For example, every request for a new monitor is considered pre-approved and every request for software requires approval from the client’s superior.
  5. Review the process and procedures of request fulfilment to identify which support teams are responsible for answering and if there are any specific requirements.
  6. Accept that knowledge must be provided in the knowledge base when a service request offering is released. The main goals of self-service are to give clients what they want faster and to redirect requests as much as possible. This way you will be able to answer questions through a simple FAQ; include this knowledge as part of the plan when you create a service request offering.
  7. Review the Service Level Agreements (SLAs) to ensure that you have the right metrics and notifications properly defined, allowing requests to be fulfilled in a viable period of time.
  8. Accept that reporting is necessary, so you can properly manage the whole lifecycle of a service request and the catalogue, in the long run.

This content is based on an article published on the Atlassian Blog – Best Practices for Service Request Management. Xpand IT is the exclusive partner of Atlassian in Portugal and was awarded the status of Atlassian Platinum Solution Partner.

Ana Paneiro

Pledge 1%: Supporting Associação Crescerbem with Jira Software

Associação Crescerbem is a Private Social Solidarity Institution, founded in 2011, whose main goal is to help families of hospitalised children who are experiencing economic deprivation. The association started at Dona Estefânia Hospital in Lisbon, near which its headquarters remain. Since then, its support has been extended to Santa Maria Hospital and Beatriz Ângelo Hospital.

The mission of Crescerbem is, specifically, to provide for families in a personalised manner, according to the specific needs of each case, helping them become more independent and autonomous. This way, follow-up does not happen only while a child is hospitalised, but also in the period after medical discharge.

The association started offering home support and, through that, other needs started to be identified besides medical follow-up; from that point on, countless parallel projects came to life, such as the solidarity pantry (food baskets provided to families), the laundry service, and the solidarity pharmacy (which provides the necessary medication).

Despite the number of families helped and the number of existing cases, all the information was still offline. This means that it was impossible to access the necessary information without physically being in the association’s headquarters. Volunteers spent hours searching for case files and updating them, which resulted in a huge lack of visibility on the current state of each support case. Therefore, computerising all this information was critical. However, there was a problem: the lack of financial means to spend on technological solutions that could put an end to this problem.

It was at this stage that Donate IT – a community of volunteers who work in information technology – came into action through Sofia Neto, a community volunteer and Collaboration & Development Solutions Lead at Xpand IT. Sofia managed to connect her work at Donate IT with the Pledge 1% movement – a movement which Xpand IT joined in 2017, thereby committing to donate annually 1% of profit and 1% of products to social solidarity institutions. Completely pro bono, Xpand IT made Jira Software available to Crescerbem (including implementation and support), thereby fulfilling Donate IT’s vision – helping to help others – as well as Pledge 1%’s.

Besides being extremely important to the computerisation of all of Crescerbem’s documents, this project proves that Jira Software has many more uses than software management. In practice, it can be adapted to any reality.

Inside the association, each family corresponds to an ‘issue’ in Jira. When support for a new family starts, a new issue of the type ‘family’ is created. Each issue holds the complete information that characterises the family: when the support started, how many children there are, which country the family is from and the parents’ contact details.

Everything is organised in different tabs to make viewing and editing the information on each family easier, and all initiatives are registered as subtasks: e.g. home support or medication provided. Therefore, a record of everything that happens has become an integral part of the process, and everyone with the right permissions can access that information anywhere.

Xpand IT is already working on the new release and it will have two goals: the first one is to introduce Confluence to further facilitate the sharing of information about tasks to be undertaken or families with open cases; the second one is to include Xporter, so that the association can easily export a report, for example, and to always be able to present it with the best formatting.

With this software, we have the ability to computerise all social processes and this means that access to all information related to the families is only a click away. What is the importance of this change? We will have more time. Time to begin expanding to other hospitals; time to help more families; and time to move forward with the idea of a social business, which will make Crescerbem self-sustaining.

Isabel Ramos - Co-founder, Crescerbem
Ana Paneiro

DevOps is not Dev & Ops – What I didn’t know about DevOps

All these years I have heard about DevOps, but I was truly convinced it was too techy for me.

I thought it was about continuous integration, automation, and awesome DevOps guys, who knew not only how to develop software but also how to release and manage production environments…

Now, I realise that I was completely wrong… DevOps is not Dev & Ops teams together… but an entire organisation that collaborates – really collaborates.

Of course you need automation; of course you need continuous integration – but that’s not all.

In a DevOps culture you must follow these rules:

  • Know the flow = understand how work goes from “to do” to “done”
  • You don’t work in a silo = instead of working in an isolated team that is just worried about their “own” work, you work for a purpose/value
  • You are constantly learning & improving = Don’t waste time – if something needs to be changed, change it

But how can you transform a whole organisation? Below, you can see some practical tips:

  • Make your work visible to everyone; don’t worry what others may think about it.
  • Change your mindset. Let me tell you a story that someone once told me:

When visiting NASA, JFK once saw a janitor cleaning the floor and asked him: “What are you doing?” He expected an answer like “I am cleaning the floor”, but instead the man said: “I’m helping the men get to the Moon.”

  • Add value to your user stories; don’t create them just for someone to do something, but because you need to generate value, like improving customer satisfaction to 80%.
  • Collaborate, collaborate, collaborate even more… No man is an island, so don’t work like one.

Tools are not the most important element, but they can definitely help. Running shoes don’t make you a runner, but they will help you to run better.

If you are searching for tools that can help you understand the flow of work, make your work visible, and help you collaborate better with your team, just take a look at Jira, which allows teams to capture and organise work, assign it to the team and track team activity.

Sofia Neto

Collaboration & Development Solutions Manager, Xpand IT


Xpand IT receives Atlassian Philanthropy Partner of the Year 2018

Barcelona, 4 September 2018 – “Moments like these are the ones that remind us that the path of giving back to the community can really make the change for a better world.” These were the words of Pedro Gonçalves, Xpand IT’s Co-Founder, after having received the award for Philanthropy Partner of The Year at the Atlassian Summit 2018.

Xpand IT was the first Portuguese company to join the Pledge 1% movement and by doing so, has committed to donate 1% of product and annual profit to charitable organisations. During the last year, it has been deeply involved with the philanthropic movement, developing a major series of initiatives aimed at giving back to the community.

“Atlassian is thrilled to recognize and honor our 2018 Partner Award recipients”, said Martin Musierowicz, Atlassian’s Head of Global Channels. “Solution Partners are instrumental to our customers’ success and we are excited to be able to highlight some of our top partners who are going above and beyond to support customers and provide Atlassian services.”

Xpand IT plans to raise the bar in disseminating the Pledge 1% mission in 2019 and multiply the number of initiatives aimed at helping those in need – all the while improving companies’ success through better collaboration using Atlassian technology.

Atlassian is the company behind products such as Jira, Jira Service Desk, Bitbucket and Confluence. Its mission is to help every team unleash its potential. Xpand IT has achieved the highest Solution Partner level, Platinum, and has an impressive track record of implementing projects based on Atlassian products.

For Sofia Neto, Collaboration & Development Solutions Lead at Xpand IT, being present at the Summit is not only a unique opportunity to meet people and share know-how and experiences, but is also recognition of the continuous hard work: “We’ve had the opportunity to participate in such exclusive events and the experience goes far beyond a traditional one. This is the second year in a row that we have been distinguished by Atlassian, this time recognising our involvement in the philanthropic movement Pledge 1%, and it’s something that truly makes us proud and just makes us want to do more and more. This is definitely a huge part of what it is to be an Xpander.”

The Xpand IT team receives the Philanthropy Partner of the Year 2018 award from Mike Cannon-Brookes, co-founder of Atlassian, in Barcelona.
Sílvia Raposo

Applying ‘Product Thinking’ to UX

Life’s too short to build something nobody wants.

Ash Maurya in “Running Lean: Iterate from Plan A to a Plan that Works”

UX/UI in Xpand IT

During recent years, Xpand IT has been investing in supplying UX/UI services, which has resulted in significant growth of our HCI (Human-Computer Interaction) portfolio in B2B and B2C application software and mobile apps, for both Portuguese and international clients.

Our interfaces and user experience design have always been based on the User-Centred Design concept and have been delivered in almost all industries: retail, banking, telecommunications, insurance, health, transportation, e-commerce, mobility, public utility, and others.

We are prepared for the next challenges concerning user experience in digital products – for example, we increasingly propose and design CUIs (Conversational User Interfaces). However, we still find traces of a mentality that is not completely receptive to the idea of the digital product, and that is more worried about lining up a huge number of functional requirements that are sometimes completely inappropriate to the needs of the product’s end user.

The immediacy experience offered mainly by social media apps is transforming our expectations concerning the way we want to use products and digital services. As UX designers, we feel the need to analyse the complete ecosystem that brands and users share, in order to define how a business can still be relevant in a world where immediacy is king.

User-Centred Design – with its concept of bringing users into the design process – exists to reduce the gaps between those who create a product and those who use it. The UX team from Xpand IT is focused on finding these gaps, preventing them and eliminating them.

Thinking about the product

In its traditional approach, UX/UI design is focused on the functionality of a digital product: the appearance of the interface (UI) and how users interact with it (UX).

However, a group of functionalities is just a small and fragile part of a product: it is just some of the many possible solutions to the problem the product is trying to solve.

It is not that functionality is not important, but it is usually secondary to the reason why people use a product. The reason is simple: the user uses the product to solve a specific problem in the real world.

In practice, this means we have to understand the product first. A particular function may (or may not) be a useful part of a product, but without the product, that functionality may be wasted.

For example, Uber’s app is frequently used as a good example of user experience design: one of the functions that creates the most empathy is the countdown that shows the time until the car is due to arrive, which is certainly convenient and is related to the goal of the app. However, what makes Uber so attractive is the ability to obtain quick and easy transportation in your area at any time. Even if the countdown functionality did not exist, the app would still be useful. In other words, Uber was conceived having in mind the goal of the product and not the resources that came with it.

Applying Product Thinking to the user experience has been gaining increasing adoption among UX designers worldwide, and it spread when well-known international professionals – such as Germany’s Nikkel Blaase – brought it to a wider audience. Incidentally, this talk might be a good starting point to learn more about the subject.

Defining the product

All in all, companies tend to assume that the more functions a product has, the more useful it will be; that the broader the target audience, the more people will use it; and that the more usage scenarios are mapped out, the more present it will be in people’s lives.

Which is not necessarily true: there are plenty of products out there with loads of functions that are not used by anyone.

A very common mistake is to start immediately by designing any kind of interface.

However, if the user’s problem has not yet been identified, why are functions and interfaces already being thought about?

It is precisely in this aspect that a lot of digital products fail: they try to solve a problem that does not exist.

Before any solution is designed, a few things need to be clear: a deep understanding of the problem, who the user is, and how the product is going to solve that problem.

However, the process of creating digital products tends to be a little chaotic: inside a company, different departments, areas and businesses have different opinions about what the product must be, whom it must be designed for and, above all, which functionalities it must have.

That is why a thorough reflection is needed to clearly define the scope and requirements of a digital product:

  • Why are we investing in this product? What is the business case for creating it? What data and statistics show that the product is viable?
  • What is the product? What is its primary function? How does it stand out from the competition?
  • For whom is the product being created? What is the profile of the typical user? Which specific behaviours or needs of this user should be considered?
  • Where and when will the product be used? How often, and for how long? At home, in traffic, at work? Is it a product for constant use or one-time use?
  • How do we want people to use the product? What do we want people to feel when using it? What problems do we want to solve?

Finding the correct answers to these questions constitutes the basic strategy to define a product and causes alignment between the various business areas of companies. This process, when well driven and supported by UX designers, brings huge advantages:

  • Build the right functions and interfaces for the user.
  • Understand the experience as a whole and not just a visual and interaction layer the user will see.
  • Ensure that the product solves real problems for its target users.
  • Minimise the risk of building something nobody will want to use, or that will not last.

Conclusion

When this kind of thought about the product is part of the process from the beginning, UX designers can ask the right questions, communicate more efficiently and suggest appropriate functionalities.

It is easy to be overwhelmed with infinite functionality possibilities and ignore some of the important parts of the design process.

Avoid the traps of focusing only on functionality instead of on what the product is for, by making thinking about the product part of the UX design process for the mobile or web app. There is nothing wrong with functionality, but it should not be more important than the real goal of the product.

Keep this in mind, and the final result will be a digital product that is created, tested and personalised for the defined target audience, with a greater probability of becoming essential and making users’ lives easier.

Carlos Neves

Senior UX & UI Consultant, Xpand IT


Kotlin and a brighter future

Picture this odd world called The JVM.

(From here on, it’s better to read it with David Attenborough’s voice…)

At first there was only one living creature on it: its name was Java. Java was fun; Java was new; Java was simple; Java was powerful – yet Java wasn’t enough. Other species started to come to life, such as: Ceylon, Groovy, Scala, Clojure, Frege, Kotlin. Even Java itself evolved over time. “But why so many species?” is a question one might ask. Had the mighty creator failed with its own creation? Were the other Gods (the programmers) not pleased? Maybe they just weren’t happy enough…who knows!

Now, this is the case for Kotlin.

Enough with the “Life on Earth” analogy, but you can keep the David Attenborough voice – we all know how incredible it is.

Yes, the question is, indeed, why yet another programming language for the JVM? I can’t answer that, I really don’t know, but I believe I have some arguments that might help shed light on why this is a strong specimen.

Kotlin’s evolution

Although it seems like it is, Kotlin is not completely new. Its first commit dates back to 8 November 2010 and it has been slowly maturing over time, just like a fine whiskey.

General adoption has been increasing in recent months, helped by major events such as Spring’s adoption of Kotlin in version 5, and Google I/O 2017, where Google announced official support for the Android platform. 2017 was also the year of the first KotlinConf, where it became noticeable how much Kotlin is reaching beyond the Android world.

It is also curious to see the number of StackOverflow questions related to Kotlin increasing after the Google I/O announcement and the fact that it now appears on the TIOBE Index.

As for public open source projects on GitHub, according to GitHut 2.0 for 2017/Q4, Kotlin scores in the top 50 list:

  • 16th place for project stars, totalling 0.480%
  • 17th place for issues, totalling 0.489%
  • 23rd place for pull requests, totalling 0.351%
  • 19th place for pushes, totalling 0.424%

…with overall positive growth.

Java interoperability

Kotlin stands out amongst the many JVM languages mainly because of its ability to interoperate easily with Java on the JVM runtime.

It compiles to Java bytecode and adds practically zero overhead: it can target versions 6, 7, 8 and 9 of Java bytecode while taking advantage of the features of each version. One great example of Kotlin’s power is its support for lambda expressions even when compiling to Java bytecode 6, a feature the Java language itself only gained in Java 8.
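
As a small sketch of that claim, the snippet below passes a lambda to a higher-order function; the compile command in the comment (using kotlinc’s `-jvm-target` flag) is illustrative only:

```kotlin
// Lambdas work even when targeting old bytecode, e.g. (older kotlinc versions):
//   kotlinc Squares.kt -jvm-target 1.6 -include-runtime -d squares.jar

// A higher-order function: takes a function and applies it twice
fun applyTwice(x: Int, f: (Int) -> Int): Int = f(f(x))

fun main() {
    // A lambda passed as the last argument, outside the parentheses
    val result = applyTwice(3) { it * it }
    println(result)  // 81
}
```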

Tooling

Kotlin is developed by JetBrains, making it a first-class citizen within IntelliJ IDE, Android Studio and CLion. Features such as refactoring, code generation, code completion, compiling to bytecode and sneak-peeking the bytecode on the fly – and even a Java to Kotlin converter – are all there for your delight.

The command-line compiler (kotlinc) has a REPL mode that can be used to easily try out Kotlin snippets on the fly.
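
For instance, a snippet like the following can be typed into the REPL piece by piece to explore string templates (the function name here is just an example):

```kotlin
// Each declaration below can be entered into the kotlinc REPL on its own
fun describeLength(s: String) = "\"$s\" has ${s.length} characters"

fun main() {
    // String templates interpolate values directly into the literal
    println(describeLength("Kotlin"))
}
```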

Multiplatform targets

Compilation for multiple platforms is supported. Using Kotlin you can target the JVM, JavaScript and, through LLVM, native code.

Let me try to rephrase that to get the point across:

“Program with Kotlin and get the server-side code deployed to JBoss; share business logic code with the Android client application and also the JS/HTML web client. Oh, let’s not forget that cute IoT device – share some code with it as well. What’s that? You want iOS? Sure, why not!”

Kotlin/Native leverages LLVM to natively target a vast number of platforms, such as Windows, Linux, macOS, iOS, Android and WebAssembly.

Kotlin for JavaScript enables you to transpile Kotlin into JavaScript, making it possible, for example, to create React web apps.

A great showcase of this feature is the KotlinConf 2017 companion application, which is entirely implemented using Kotlin, covering backend, frontend and mobile applications.

Embracing Kotlin as a team

Starting to use Kotlin doesn’t have to be a risky decision! By nature, it works incredibly well with existing Java code, and it’s even possible to have both Kotlin and Java code in the same project. Developing new features in Kotlin on an existing project is therefore a low-risk approach to adoption, as there’s no need to scrap everything and start from the ground up with a new programming language or technology.

Using Kotlin has also become a day-to-day reality for Android developers. Most Android GDEs (Google Developer Experts) advocate and use Kotlin on a daily basis and it can even be found in Android’s Official documentation.

As for non-mobile developers, such as backend developers, Kotlin presents itself not only as a way to achieve current programming language standards on the JVM while targeting Java 6/7, but also as a replacement for Java as a whole. Projects such as TornadoFX, Ktor and Spring, among many others, provide the necessary foundations for Kotlin to thrive outside the realms of mobile. And since Kotlin is almost 100% interoperable with Java, you can keep using the same frameworks you’re used to but with the syntactic happiness of Kotlin!
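
Since that interoperability is the selling point, here is a minimal sketch of Kotlin leaning directly on plain JDK classes, with no wrappers or bindings involved:

```kotlin
import java.time.LocalDate
import java.util.UUID

// Kotlin calls Java APIs directly; JDK types flow naturally into Kotlin code
fun main() {
    val id = UUID.randomUUID()        // java.util.UUID, created the Java way
    val today = LocalDate.now()       // java.time.LocalDate
    val nextWeek = today.plusDays(7)  // a Java method call, assigned to a Kotlin val
    println("Request $id spans $today to $nextWeek")
}
```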

The most common approaches to adopting Kotlin are:

  • Re-write Java unit tests in Kotlin. Some advocate this as a good way to dip your toes into Kotlin without putting your production code at risk (though, in my view, tests should exercise production code). Kotlin does offer some neat tricks to make unit tests more legible and compact.
  • When starting new features, instead of using Java, use Kotlin!
  • If the project you’re working on is due some refactoring, write the refactored version of the code base in Kotlin.
  • Go full Kotlin and leave the sad days behind by saying sayonara to Java!
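
As an illustration of the first approach, the snippet below shows two features that make Kotlin tests pleasant, without assuming any particular test framework: backtick identifiers for sentence-like names, and data classes for structural equality (plain `check` stands in for a real assertion library):

```kotlin
// Data classes give equals()/hashCode() for free, so assertions compare by value
data class User(val name: String, val age: Int)

// Backtick identifiers allow readable, sentence-like test names
fun `parsing a csv row yields a User`() {
    // Destructuring the two fields of the split row
    val (name, age) = "Alice,30".split(",")
    val user = User(name, age.toInt())
    check(user == User("Alice", 30))
}

fun main() {
    `parsing a csv row yields a User`()
    println("ok")
}
```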

Kotlin language

The language itself is beautiful. Kotlin is a statically typed language with conciseness and readability as major goals.

It also attempts to solve one of the biggest problems of the Java world: NullPointerException. It does so by making nullability part of the type system and forcing developers to think about it explicitly.
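
A minimal sketch of how that looks in code, with nullable (`String?`) and non-nullable (`String`) types doing the enforcement:

```kotlin
// The parameter is explicitly nullable: String? can hold null, String cannot
fun describe(input: String?): String {
    // Safe call (?.) and the elvis operator (?:) force null to be handled
    val length = input?.length ?: 0
    return "length=$length"
}

fun main() {
    println(describe("kotlin"))  // length=6
    println(describe(null))      // length=0
    // val s: String = null      // would not compile: null safety is enforced
}
```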

Another aspect that makes it a top tier language is the fact that the authors attempt to bring the best features available in other languages into Kotlin itself. To name a few:

  • Extension functions
  • Destructuring declarations
  • Higher-order functions
  • Type inference
  • Functional facilities (map, filter, flatMap, etc.)
  • Custom DSLs
  • Named parameters
  • Pattern matching
  • Operator overloading
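
Several of the features listed above can be shown in one short, self-contained sampler (names like `Vec` and `describePoint` are made up for the example):

```kotlin
// Extension function: add behaviour to an existing type
fun String.shout() = uppercase() + "!"

// Operator overloading via the `plus` convention; data class enables destructuring
data class Vec(val x: Int, val y: Int) {
    operator fun plus(other: Vec) = Vec(x + other.x, y + other.y)
}

// Named parameters make call sites self-documenting
fun describePoint(x: Int, y: Int) = "($x, $y)"

fun main() {
    // Type inference + higher-order functions + functional facilities
    val squaredEvens = (1..10).filter { it % 2 == 0 }.map { it * it }

    // Destructuring declaration on the result of an overloaded operator
    val (x, y) = Vec(1, 2) + Vec(3, 4)

    println(describePoint(x = x, y = y))
    println("$squaredEvens ${"kotlin".shout()}")
}
```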

Kotlin syntax

Here are some Kotlin snippets as a sneak peek. These attempt to showcase the simplicity and conciseness of the Kotlin language in comparison to Java.

Java:

// Class definition
class Person {

    private String name;
    private int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() {
        return name;
    }

    public int getAge() {
        return this.age;
    }

    public String greet() {
        return "Hi " + this.name + ", how are you doing?";
    }
}

Kotlin:

// Class definition
class Person(val name: String, val age: Int) {

    fun greet(): String {
        return "Hi $name, how are you doing?"
    }
}

Java:

// A typical unsafe Java Singleton
public class Singleton {

    private static Singleton instance = null;

    private String someProperty = "SOME_SINGLETON";

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }

    public String getSomeProperty() {
        return someProperty;
    }

    public String appendSomethingToProperty(String something) {
        return someProperty + something;
    }
}

Kotlin:

// A simple Kotlin Singleton
object Singleton {

    val someProperty: String = "SOME_SINGLETON"

    fun appendSomethingToProperty(something: String) = someProperty + something
}
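
As a usage sketch of the Kotlin version (renamed `Config` here so the snippet stands alone), an `object` needs no `getInstance()`: the runtime guarantees a single, lazily and thread-safely initialised instance:

```kotlin
// Self-contained object declaration mirroring the Singleton above
object Config {
    val someProperty: String = "SOME_SINGLETON"
    fun appendSomethingToProperty(something: String) = someProperty + something
}

fun main() {
    // Call sites just reference the object by name
    println(Config.appendSomethingToProperty("_SUFFIX"))  // SOME_SINGLETON_SUFFIX
}
```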

What’s next?

If I’ve sold you on Kotlin and you want to try it out, here are some pointers to help you:

And keep an eye out for future Kotlin articles on this blog!

João Gonçalves

Mobile Consultant, Xpand IT


My day at Google Cloud OnBoard

On the 12th of March 2018, I had the pleasure of participating in the Cloud OnBoard event, which is a free and introductory training for Google Cloud Platform (GCP), in London. The event was created for IT Managers, System Engineers and Operations professionals, Developers, Solution Architects and business leaders searching for Cloud solutions.

Why Google Cloud?

There are a few reasons to consider Google:

  • Google has been contributing significantly to the open source world throughout the years, namely:
    • Kubernetes, the container orchestration system behind Google Container Engine, which is now open source.
    • TensorFlow, an open source framework for machine learning.
    • Apache Beam, a portable, unified and extensible programming model for ETL that supports both batch and stream processing.
    • Some strong contributions to Node.js.
    • More than 2,000 contributions to open source projects.
  • Currently, most of my tasks as a consultant are undertaken in a well-known bank, and it is public knowledge that this institution is linked to Google.
  • Other big companies and start-ups use GCP, such as Snapchat, Revolut, Philips, Ocado, BNP, etc.

During this training, I discovered a few more reasons to consider Google:

  • Google cares about the environment:
    • It has the first data centres certified to ISO 14001
    • It is 100% carbon neutral and has used renewable energy since 2017.
  • Billing is calculated by the second, not by the minute.
  • Huge discounts exist for bigger commitments or sustained (24/7) use of resources.
  • You can customise machines if the standard ones do not meet your needs.
  • It is possible to scale up and down, right down to zero (this is managed automatically, based on requests or the amount of data).
  • Multiple security layers, from HTTPS to storage (not even Google can access the data: the client holds a master key through their password).
  • Google also creates hardware. As of right now, it has TPUs (supposedly, more hardware development is expected in the roadmap).
  • The long-term goal is to move from virtualisation to a No-Ops (serverless) model.

About the Cloud OnBoard event: it covered the main technologies available in GCP, from development to analysis.

Computing

App Engine

I remember that when GCP was launched, App Engine was one of the main products, even though most people did not really understand the concept. Essentially, I see it as a PaaS for developing apps, with Node.js, Java, Ruby, C#, Python and PHP supported out of the box. Some interesting functionalities are:

  • It is possible to take the app and deploy it on-premise, for example, by using a Docker container or Kubernetes.
  • Version control of the app is available. For example, it is possible to use load balancing with one version serving requests from Europe and another serving requests from the USA.
  • GCP automatically manages priorities based on requests (which may be defined by region). This functionality was shown in the training, by using Apache Benchmark. However, in my opinion, response was a little slow, maybe because it was such a small panel. Nevertheless, it proved its “scalability” both up and down to zero in a multi-region context, by using load balancing to alternate between versions of the app.
  • It is easy to integrate an app developed in App Engine with other GCP products, such as databases, dataflows, monitoring, etc.
  • In addition, Google provides a mobile app that allows you to manage GCP if, at some point, you cannot access your computer.

Compute Engine

This is GCP’s virtual machine offering and is similar to Amazon EC2.

In this event, it was revealed that the 2014 acquisition of AI start-up DeepMind, for $500m, helped reduce the operating costs of Google’s data centres to the point where the deal broke even.

Kubernetes Engine

Previously known as Google Container Engine, it was renamed after Google made Kubernetes open source. For those familiar with Docker, Kubernetes is a well-known engine for deployment, management and scaling of containers. Since Kubernetes originated at Google, GCP is probably the best place to run it in the Cloud.

Networking

Load Balancing

This is standard functionality for any Cloud provider. The only reason I list it here is that this product is used by Google’s own services (Google Search, Gmail, Google Maps, YouTube…). If your website or app has peaks where it receives large numbers of requests, GCP does not need prior notice to scale automatically. Obviously, this also depends on the architecture chosen for the app.

Storage and Databases

Cloud Storage

It offers a storage model similar to Google Drive, Dropbox or Box, but is easier to integrate with other GCP products. In this product, an object (PDF, DOC, MP3…) is immutable, and storage scales easily. This offering is very similar to Amazon’s S3.

Cloud SQL

Basically, this is MySQL prepared by Google to be scalable and highly performant. At the time of the event, PostgreSQL support was only available in beta, but it is now generally available.

Cloud Spanner

Essentially, this is a database that scales horizontally while remaining strongly consistent over relational data (it would be interesting if Cloud Spanner were open source, but even then the full experience would not be reproducible, since it depends on Google’s sophisticated infrastructure and the TrueTime API – see below). It is probably the dream database, supporting:

  • Globally consistent transactions
  • Automatic replication
  • SQL (ANSI 2011 standard with extensions)
  • Horizontal scalability
  • High availability

This database puts into perspective one of the most well-known theorems in computer science: Brewer’s theorem, or the CAP theorem. It states that, since no distributed system is free from network failures, it is impossible to guarantee all three of the following properties simultaneously: consistency, availability and partition tolerance.

How did Google manage something theoretically impossible? Google data centres use a special API called TrueTime, Google’s globally synchronised clock. However, a big part also comes from Google’s sophisticated infrastructure. For those who would like to know more, you can read the papers here and here.

Bigtable

A widely known NoSQL database, used by Google services such as Google Search, Gmail, Google Maps and Google Analytics. Even though Bigtable itself is not open source, there is a very similar one that is: HBase.

Big Data

DataFlow

A service used to transform and enrich data in stream and batch modes, working as an ETL tool. The main feature of Dataflow is its Apache Beam support, which lets you develop pipelines (using SDKs in Java and/or Python) on-premise and easily move them to GCP with Dataflow.

Dataproc

This is Google’s take on Apache Spark and Apache Hadoop, and it provides other tools from the Hadoop ecosystem, such as Hive. GCP uses the open source code as its base, but makes a few alterations so that it can connect to some of its other products, such as Cloud Storage.

Pub/Sub

The GCP service for event-focused streaming. Its on-premise equivalent would be Apache Kafka.

Artificial Intelligence

TensorFlow

GCP has several machine learning services, and the majority use an open source framework, TensorFlow.

Similarly to Kubernetes and Apache Beam, you can use TensorFlow on-premise and, when you feel comfortable, migrate to GCP. Furthermore, it is possible to use GCP’s APIs, which already have trained models for detecting items in an image, translating text, recognising speech, extracting video metadata, etc.

Lastly, here is a comparison of equivalent services in GCP, AWS and Azure:

Google Cloud Platform    | Amazon Web Services          | Microsoft Azure
Google Compute Engine    | Amazon EC2                   | Azure Virtual Machines
Google App Engine        | AWS Elastic Beanstalk        | Azure Cloud Services
Google Kubernetes Engine | Amazon EC2 Container Service | Azure Container Service
Google Cloud Bigtable    | Amazon DynamoDB              | Azure Cosmos DB
Google BigQuery          | Amazon Redshift              | Azure SQL Data Warehouse
Google Cloud Functions   | AWS Lambda                   | Azure Functions
Google Cloud Datastore   | Amazon DynamoDB              | Azure Cosmos DB
Google Storage           | Amazon S3                    | Azure Blob Storage
Google Cloud Dataflow    | AWS Glue / Kinesis / EMR     | Azure Data Factory / Stream Analytics / Data Lake Analytics
Google Cloud Dataproc    | Amazon EMR                   | Azure HDInsight

Joel Latino

Senior BI & Big Data Consultant, Xpand IT
