Latest news

Microsoft PowerApps: how to transform ideas into business apps

We live in a world in which mobility is ubiquitous: there are, at this point in time, apps for whatever action we need to complete at any given moment of our lives. The majority of those apps are built with the consumer in mind and ideally, it is this end user that companies are thinking of when designing and building their digital solution. However, mobile applications are not only useful for the general public. Mobile apps, in a business context, can be critical to managing a company’s daily activities.

Business apps, as they are commonly called, are specifically designed to solve a problem within an organization. In that sense, these apps offer companies the possibility of managing part of their business in a faster, more efficient and more productive way. However, not all business apps justify the time and investment required for a custom development tailored to the needs of each and every company. Since companies have to contend with budget constraints as well as time and human resources limitations, building an app that meets internal needs for efficiency and effectiveness in processes that could be automated might seem like a distant dream.

Taking advantage of the entire Microsoft ecosystem, which is already present in many organizations, it is possible to build such an app with Microsoft PowerApps. From an internal point of view, PowerApps has a considerable impact on business processes – they can be dematerialized, which increases productivity and reduces inefficiencies.

On the other hand, with Microsoft PowerApps you can enable everyone within the organization – even people with no programming or coding skills – to create and maintain the application. Additionally, Microsoft Power Automate – a tool that complements Microsoft PowerApps – helps automate processes across different applications and services, whether that means approving requests, triggering actions when certain conditions are met (one path if the answer is yes, another if it is no), automating repetitive processes, and so on.

In this way, these digital solutions no longer depend exclusively on the IT department: power users, who already have the business knowledge – provided they are equipped with the necessary knowledge to use these tools – will be able to sustain an effective solution, based on Microsoft infrastructure, without needing programming skills. Moreover, with Office 365, data circulates naturally instead of generating silos of information that create entropy and inefficiencies. Therefore, your app is always kept up to date with real-time data.

Practical scenario: automating vacation approvals

The vacation approval process is a great use case for these tools: you can automate the whole process. The first step is building a vacation request form that the employee submits for approval. The manager then receives a push notification on their mobile, from which they can immediately approve the request. The process finishes by sending an automatic email to the employee letting them know that the request has been approved. In just a few steps, we are able to automate a process that was previously dependent on the HR department. With these tools available, we can quickly go from conceptualization to execution, and in just a few minutes the employee can start planning their dream vacation instead of despairing at the thought that it might take several weeks for the request to be approved. Beyond this example, there are many other use cases for Microsoft PowerApps and Microsoft Power Automate – these tools are a tailored solution, fully adaptable to the singular reality of your company.
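To make the idea concrete, here is a minimal TypeScript sketch of that approval flow. It is only an illustration of the logic: in practice the flow is assembled visually in Power Automate, and the VacationRequest type and the helper functions below (sendPushNotification, sendEmail) are hypothetical stand-ins for the PowerApps form and the Power Automate connectors.

// Hypothetical sketch of the vacation-approval flow described above.
interface VacationRequest {
  employeeEmail: string;
  managerEmail: string;
  startDate: string; // ISO date, e.g. "2020-08-01"
  endDate: string;
}

// Step 1: the employee submits the request form (built in PowerApps).
// Step 2: the manager receives a push notification and approves or rejects it.
// Step 3: the employee is notified of the outcome by email.
async function handleVacationRequest(request: VacationRequest): Promise<void> {
  const approved = await sendPushNotification(
    request.managerEmail,
    `Vacation request from ${request.employeeEmail}: ${request.startDate} to ${request.endDate}. Approve?`
  );

  const subject = approved ? "Vacation request approved" : "Vacation request rejected";
  const body = `Your request for ${request.startDate} to ${request.endDate} was ${approved ? "approved" : "rejected"}.`;
  await sendEmail(request.employeeEmail, subject, body);
}

// Hypothetical helpers standing in for the push-notification and email connectors.
async function sendPushNotification(to: string, message: string): Promise<boolean> {
  console.log(`Push notification to ${to}: ${message}`);
  return true; // pretend the manager tapped "Approve"
}

async function sendEmail(to: string, subject: string, body: string): Promise<void> {
  console.log(`Email to ${to}: ${subject} - ${body}`);
}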

Xpand IT experience with PowerApps

Xpand IT has been working with Microsoft PowerApps and Microsoft Power Automate since the first stages of these products’ development. As such, we recognized from the outset that these technologies could complement our own mobile development solutions, whether cross-platform (Xamarin) or native. From our point of view, in the context of business apps, PowerApps has an interesting value proposition, not only because of the shorter time-to-market but also because it allows business users to materialize their vision in a useful tool – a tool that might justify, in the future, the development of new versions or the use of other technologies. For every problem there are many different solutions, and we consider Microsoft PowerApps a very interesting tool to boost digital transformation in organizations.

Filipa Moreno

Experience Economy: 3 trends for 2020

The concept of ‘experience economy’ is not new, but clear recognition and awareness of it by businesses is now driven by the possibilities offered by the digital age and growing use of so-called edge devices (smartwatches, smart TVs, smart cars, home assistants… ).

Joseph Pine and James Gilmore theorised about us, as modern consumers, and woke us up to the reality that we seek something more from economic transactions; that a purchase is no longer just a simple exchange of money for a product or service. We are ever more demanding, and we increasingly seek memorable experiences. That’s why the authors argue that every sale to a customer should be treated as a standalone event.

But when does a service cross the line and become an experience? When it is highly customised. As Joseph Pine said, “There’s an antidote to commoditisation; and it’s called customisation”. When we design an offer that is completely appropriate and perfectly customised for a particular customer, we move beyond service to a value-added experience – and, in fact, we are willing to pay a much higher price to enjoy it.

One of the typical examples of this ‘fourth dimension of economic supply’ is Starbucks. When we go to Starbucks, we don’t say we’re going for a cup of coffee – because we’re not. What we really look forward to is the experience: the decor, the aroma, the baristas, the special Halloween or Christmas editions, meeting up with friends and always being treated as a unique, special customer. We even get called by our name. At Starbucks, a simple commodity – coffee – becomes a true premium experience. And that’s the secret. Look at other similar examples such as Uber, Spotify or Airbnb.

Currently, with all the technology we have available and the amount of data we can collect, the possibilities for offering differentiating experiences to consumers are endless, and mobile applications will have to be a strong bet if companies really want to retain their customers. For example, what a consumer with small children wants is a notification that milk or nappies are on offer today, right as they walk through the doors of the supermarket, even if these items were not on their shopping list – experience!

3 trends for 2020

1. Operational data vs. experience data

“Data is the new oil”, right? Right. However, is the data collected today what a brand most needs to be able to give its customers the best experience?

In the experience economy, what really counts is emotion, feelings and values. Yet data collection is often focused on operational data, without taking these more human aspects into account. Therefore, as mentioned in the talk Customer Directions: Five-Star Experience Economy at IDC Directions 2019, companies should focus on 5 fundamental principles:

  • Personalisation (get to know your customers and talk to them in order to understand their lifestyle, consumption habits, etc.);
  • Trust (sending the message to customers that the brand will never fail them);
  • Empathy (respecting every customer and their emotions);
  • Delivery; and
  • Engagement (ability to relate to consumers).

With the customer at the heart of companies’ concerns, and with systems prepared to gather this kind of insight – a powerful CMS, for example – it will become easier for brands to use what they know about us to our benefit.

2. Personalisation, personalisation, personalisation

The personalisation of interactions must be assured throughout all communications. It is not enough to receive emails that differ from our friends’, based on our consumption habits: we also want to receive customised newsletters, push notifications at the right time, messages containing personalised discounts… Basically, consumers want brands to anticipate their needs and be able to answer their questions as those needs unfold.

3. Distribution of the omni-channel experience

If I put a product in my basket on a website, I expect the same product to appear in my basket in the mobile application. The experience must be consistent across all of a brand’s channels, and this omnipresence begins on digital platforms and extends to the devices supporting our daily lives, which are now increasingly intelligent: watches, cars, televisions and even washing machines (75% of the data generated today already comes from edge devices).

Conclusion

We live in a fully digital age, and the way we perceive the world, how we relate to one another and how we consume services or products have changed completely. Brands have to focus more and more on the experience they offer their consumers if they want to capture our attention in an increasingly competitive virtual universe.

Pine and Gilmore predicted that companies would have to focus their business on the experiences they offer their consumers, and that the memory of those experiences becomes the product – the companies and brands with the ability to create and offer this find their competitive advantage in this model.

Ana Lamelas

Apple Pay in Portugal: What changes in our daily lives

It’s 7.20 AM on a weekday like so many others.

Can you imagine what it’s like dealing with the morning chaos of preparing and eating breakfast, showering, dressing and packing the lunchbox, while trying not to forget your socks – or at least trying not to put on two different ones? Amidst the chaos, there’s always something you forget. A day like that has everything to go well, right? Wrong.

It is obvious that on such a day, while you’re looking for your card to pay for lunch, you realise that you left your wallet at home.

It only takes two seconds after that to grasp that you have just lost your financial independence and that you will have to call a colleague to come and save you by paying for your lunch – rather than spending the whole afternoon making it up to the restaurant for the meal you enjoyed without paying for it.

A few years ago, it would be plausible to come across such a situation. Nowadays, instead of making that embarrassing call to your colleague, you can just grab your phone and pay. It’s that simple. In less than a minute the payment is complete without any friction. How is it possible, you ask? Well, Apple Pay makes it possible. Thank you, gods of technology!

But what exactly is Apple Pay? What are the implications of this new service for consumers, and what does its introduction mean for the financial industry?

Apple Pay relies on mature technology. The service was originally launched by Apple in October 2014 in its country of origin – the United States of America – and, throughout these 5 years, Apple has been expanding the service to many other countries. In fact, right now, more than 50 countries support Apple Pay and we can confidently say that Apple plans to continue the evangelization and implementation of this service in more countries.

In June 2019, Portugal became one of the countries to support Apple Pay. Crédito Agrícola, Revolut, N26, Monese and, more recently, Moey! were the first banks to offer this service in the country, and we were one of the first companies to implement the technology.

Xpand IT was one of the partners that accompanied the implementation of Apple Pay within the Crédito Agrícola mobile app from the outset, and had to ensure a simple and intuitive navigation flow, so that adding cards to Apple Wallet through the banking app would be fast, easy and frictionless. Together with Crédito Agrícola, Xpand IT guaranteed that the implementation passed all security tests and all of Apple’s demanding requirements. No less important, Apple’s involvement in the implementation process highlighted the need for a partner whose sole focus would be the user experience – to assure a coherent experience capable of giving life to this new payment method.

How, then, does Apple Pay work? Getting back to the example above: after the initial setup, in which the user adds cards to the Apple Wallet app, you can use just your phone to make contactless payments. Payment terminals remain the same – as long as they already accept contactless payments, the user only needs to open the Wallet app, authenticate with Touch ID or Face ID, touch the phone to the POS and voilà – transaction completed.

There’s no need for wallets or physical cards that only create obstacles to a process that should be fast and painless. As an additional security measure, if the amount is higher than 20 euros (the limit set in Portugal), the user will be asked to confirm the transaction with a PIN.

The truth is that this service is a game changer, as it brings complete convenience to the act of paying – you pay for your shopping in any channel, be it online, in a physical store or even in mobile apps, using the devices that are already part of your life (iPhone, iPad, Mac or Apple Watch), without any complications or waste of time. On the other hand, this payment method offers more security: not only because your card data is safely stored through a virtual representation of the card (a token) – this token can be deactivated or activated at any moment in the app and is independent of the physical card – but also because biometric authentication is required for the user to authorise a transaction. This is one of the reasons why this service has been widely accepted across different sectors of the economy and why it is having such a considerable impact on the payments industry.

I know you must be thinking that not everything is rosy with this service, and that might be true. There’s something none of us can escape – we still need battery on our phones to make sure our cards are accessible and payments are just a touch away.

I can now leave my wallet at home without any concern. I just have to make sure that my phone doesn’t run out of battery. If that happens, there’s no technology that can save me.

Maybe in a few years’ time we will be able to make a payment just by authenticating our identity without the need of a phone. But that’s something for another blog post!

Filipa Moreno

Power Platform World Tour: Our experience

In the last week of August, Xpand IT travelled once again to London, where the first European stop of the 2019 Power Platform World Tour took place on the 28th and 29th. We departed Lisbon with some expectations we hoped would be fulfilled: we wanted to understand this platform – which is experiencing interesting growth – even better, and also get a glimpse into its future.

For those unfamiliar with it, the Power Platform brings together three Microsoft products that promise to streamline and promote the digital transformation of organizations. PowerApps, Flow and Power BI are tools that enable the digitalization and automation of internal processes and have enormous potential to transform the way companies manage their processes and make their decisions. With these tools, companies will be able to make informed decisions with agility, supported by technology-based processes, and take advantage of the benefits that come with them.

The Event

Getting back to London, though… the event offered us two full days of interesting content, where it was possible to meet the growing and enthusiastic Power Platform community, to explore the challenges that different industries are tackling with PowerApps and, no less important, to get a dose of inspiration from the showcased solutions and from how various companies are already taking advantage of these technologies. With The Shard as a backdrop, the event was a community get-together and a genuine sharing of experiences… In fact, one of Microsoft’s most powerful messages is the Power Platform’s simplicity of use. When they say that everyone can build an app using PowerApps and Flow, it’s true. With these products, both developers and business users have the right tools and are empowered to get better business results by building apps. This is not a tool that can solve every problem, but it is undoubtedly possible to use these powerful technologies to address some of the challenges companies face nowadays.

One of the highlights of the event was being able to hear first-hand what Microsoft has to say about these products’ evolution and what the future holds, especially with regards to the enhancements and new features that will be available to all users from October 1st onward. The AI Builder is an example of the new features we can count on: capabilities such as binary classification, object detection and form processing make it easier to include Microsoft’s cognitive services in enterprise applications, providing them with a layer of intelligence that until now wasn’t within the reach of PowerApps applications. The platform includes a whole set of new features – more than 400 in the last 6 months, according to Microsoft – that will allow more and more citizen developers to emerge.

Another highlight of the event was how these initiatives should be managed within the company, in partnership with the IT department. Even though there are many advantages in putting the power of app creation in the hands of any user – in fact, these users are already using Excel or Access to solve many problems – the company needs to guarantee that the theme of enterprise management is properly addressed. More importantly, we need to look at these initiatives in a more programmatic way: their adoption will have to be promoted continuously so that they aren’t regarded as one-shot projects.

We also confirmed our suspicions about the unprecedented growth of the platform: 700% growth in production apps and more than 2.5 million monthly active developers on the Power Platform. These are surprising numbers that show us that the low-code market is growing: Gartner and Forrester have named Microsoft PowerApps a market leader. It’s safe to say that the future is looking bright for PowerApps and the rest of the Power Platform.

In Conclusion

In short, you can expect more news about PowerApps very soon. The event was an excellent opportunity to witness how companies are innovating internally and to learn from the many experiences of the community. We have returned to Lisbon with the certainty that the PowerApps value proposition for internal empowerment scenarios is very interesting and, in this sense, can complement our mobile development offer whether in cross-platform (Xamarin) or native development.

Strategically speaking, our vision for customer facing apps doesn’t include low-code tools. However, we see potential in low-code tools when we focus on internal and Employee Empowerment scenarios. More news coming soon!

Filipa Moreno

Middleware as code with Ballerina

Let’s assume that we need to facilitate the integration of several systems. What options do we have?

There are quite a few options for performing our integration, such as Enterprise Service Bus (ESB) or other frameworks like Spring and NodeJS.

Existing approaches

Enterprise Service Bus

Let’s start by looking at the ESB, one of the ways of integrating systems. ESB provides a range of services using a standard method of communication such as REST or SOAP.

It also helps monitor all the messages passing through it and ensures they are delivered to the right place by controlling the routing of each message.

You can use code to create the service logic, but service configuration (routing, users and passwords) doesn’t need to be hard coded, because the ESB provides ways to configure services outside the code. It also helps control deployments and versioning of services.

You need to take into account that this approach has a single point of failure and requires significant configuration and maintenance.

Some approaches put ESBs in containers in order to be more ‘cloud-native’, but they still require a lot of configuration and are not very agile.

Spring and NodeJS frameworks

To achieve a more agile approach, we can use frameworks such as NodeJS or Spring to create our integration. However, to work with communication (endpoints, messages and payload data), they require libraries and plugins, plus a lot of boilerplate code to deal with payload types and to make even simple calls to services.
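To give an idea of how much scaffolding that involves, here is a rough TypeScript sketch of a small proxy service on Node.js. It assumes Express and Node 18+ (for the built-in fetch), the backend URL is a hypothetical example, and the xmlToJson helper is a placeholder for the extra xml-parsing library a real project would need.

// Rough sketch of an integration proxy in Node.js/TypeScript (assumes Express and Node 18+).
import express from "express";

const app = express();

// Hypothetical helper: a real project would pull in an xml-parsing library here
// and handle namespaces, attributes and malformed documents.
function xmlToJson(xml: string): unknown {
  return { raw: xml }; // placeholder conversion
}

app.get("/proxy/getCars", async (_req, res) => {
  try {
    // Call the backend service and read its xml payload as text.
    const backendResponse = await fetch("http://backend.example.com/cars");
    const xmlPayload = await backendResponse.text();

    // Convert the payload and return it as json.
    res.json(xmlToJson(xmlPayload));
  } catch (err) {
    res.status(502).json({ error: "Error when calling the backend" });
  }
});

app.listen(9091, () => console.log("Proxy listening on port 9091"));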

There is a new language dancing around to try to mitigate these problems between ESBs and frameworks like Spring and NodeJS. Let’s take a look at Ballerina.

Ballerina

Ballerina is a new language that is being created with the communication paradigm in mind. Ballerina allows concurrent work to be done and has transactional functions where the data is either all successfully altered or none of it is.

It has both a graphical syntax and a textual one. So, if you write the code with the textual syntax, you are helped by the graphical representation generated afterwards. This graphical representation can also be used as documentation, because it represents the flow of messages and the intervening parties in the service. You can check a comparison of the two syntaxes in the following image (textual syntax on the left, graphical syntax on the right):

Since Ballerina is constructed with communication in mind, you can count on network data types to be fully supported. This way you don’t need to add libraries to manipulate json or xml.

Ballerina is built upon integration patterns, providing a QoS where the communication is resilient, transactional, secure and observable.

To get a better idea of how Ballerina works you can check the following page https://ballerina.io/philosophy/

Service and proxy example

If you want to try the following example on your machine, please follow the guide on how to install Ballerina on your computer at the following link: https://ballerina.io/learn/getting-started/#download-the-ballerina-distribution.

In this example we will check how to create two services. One will work as a resource provider, and the other as a proxy that will convert the resources from xml to json.

Let’s start with the first service and call it service1.

Service 1

To create the service, first we create a file named service1.bal and write the following code:

import ballerina/http;
import ballerina/log;
service service1 on new http:Listener(9090) {
    resource function getCars(http:Caller caller, http:Request req) {
        var page = xml `<response>
        <cars>
            <car>
                <model>Leaf</model>
                <plate-number>AB-12-CD</plate-number>
                <plate-date>2019-01-01</plate-date>
                <serial-number>AS6F4GR8154E5G841DF548R4G1WW</serial-number>
            </car>
            <car>
                <model>Yaris</model>
                <plate-number>AB-13-CD</plate-number>
                <plate-date>2019-04-03</plate-date>
                <serial-number>ESD5GFN2RG5H451SEWFDBGR3544D</serial-number>
            </car>
        </cars>
        </response>`;
        http:Response res = new;
        res.setPayload(page);
        var result = caller->respond(res);
        if (result is error) {
            log:printError("Error sending response", err = result);
        }
    }
}

In this service we have the function getCars with our logic: first we create the response as an xml and set it in the response payload. Finally, we respond to the caller, checking whether there was an error sending the response.

To run the service we use the following command:

ballerina run service1.bal


This will create the service on port 9090 with the function getCars. We can check the function’s response at http://localhost:9090/service1/getCars in the browser or using curl.

Proxy

Now let’s create the proxy!

First we create another file with the name proxy.bal and then we write the following code:

import ballerina/http;
import ballerina/io;
import ballerina/log;
http:Client clientEndpoint = new("http://localhost:9090/service1");
service proxy on new http:Listener(9091) {
    resource function getCars(http:Caller caller, http:Request req) {
        //Get the xml response from getCars
        var responseGetCars = clientEndpoint->get("/getCars");
        
        if (responseGetCars is http:Response) {
            var msg = responseGetCars.getXmlPayload();
            if (msg is xml) {
                var responseJson = msg.toJSON({});
                //Create the response for this service
                http:Response res = new;
                res.setPayload(untaint responseJson);
                
                var result = caller->respond(res);
                if (result is error) {
                    log:printError("Error sending response", err = result);
                }
            } else {
                io:println("Invalid payload received: ", msg.reason());
            }
        } else {
            io:println("Error when calling the backend: ", responseGetCars.reason());
        }
    }
}

In the code you can see that we’ve created a service named proxy on port 9091 with a function called getCars. We have an endpoint for service1 (no plugins or libraries), and in the function getCars the first thing we do is call the function getCars from service1. Afterwards, we check whether the payload from service1’s response is an xml; if it is, we convert it to json using just one function! In the end we respond to the caller using the json from the conversion.

To test the proxy we can use curl or, to get a visual of the json, the browser at the following address: http://localhost:9091/proxy/getCars

You can compare the output from service1 and the proxy and check that the information is the same in both.

Conclusion

In conclusion, we can see that Ballerina is shaping up to be a promising language for the integration of services, with an agile approach and less boilerplate code required for handling communication between services.

But do proceed with caution: it hasn’t yet reached a stable release, so syntax and semantics are still subject to change. The first stable release is expected at the end of this year.

Daniel Amado

Single-page applications

These days, web applications are taking over from old desktop applications, bringing with them advantages such as decoupling from any particular device and convenience of use. The demand for rich, complex and yet user-friendly web applications is growing every day. Along with this demand, and gaining more and more popularity in web development, are single-page applications.

A single-page application (SPA) is a web application that interacts with the user by dynamically rewriting the current page rather than loading an entire new page from a server. This results in a more comfortable experience for the user, and one that is not continually interrupted with successive page navigations.

Background – traditional multi-page applications

Multi-page applications (MPA) are the ‘traditional’ web applications that reload and render an entire new page as the result of an interaction between the user and the web app. Every user interaction – like clicking a link or changing the URL – and every data exchange from and to the server will make another request for the new page to be rendered. This process takes time and can have a not-so-positive effect on user experience if you’re aiming for an interactive, responsive application.

This default behaviour from MPAs can be worked around by taking advantage of AJAX, which allows refreshing just part of a page. However, we have to be aware of the complexity added to the development process by this solution.
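As a rough illustration of that workaround, a fragment of the page can be fetched and swapped in with a few lines of TypeScript; the /products endpoint and the element ids below are hypothetical examples.

// Minimal sketch of an AJAX partial refresh: only one region of the page is
// replaced, avoiding a full page reload. Endpoint and element ids are examples.
async function loadMoreProducts(page: number): Promise<void> {
  const response = await fetch(`/products?page=${page}`);
  const htmlFragment = await response.text();

  // Replace just the product list instead of navigating to a new page.
  const container = document.querySelector("#product-list");
  if (container) {
    container.innerHTML = htmlFragment;
  }
}

document.querySelector("#load-more")?.addEventListener("click", () => {
  void loadMoreProducts(2);
});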

An MPA will most likely use JavaScript (JS) at the front end to add some interactivity to the application, but does not depend on it for the rendering and delivery of the page content. This makes MPA an architecture that is well suited to supporting legacy browsers that usually offer more limited JS functionality.

The MPA’s big advantage lies in search engine optimisation (SEO). When a request is made to the server to render a new page, the response is the final content for that page. Search engine crawlers will see exactly what the user sees, so the application will perform well in search engines. This is one of the big reasons why some major websites, like Amazon and The New York Times, still use this architecture.

On the other hand, applications built using this architecture tend to be bigger and slower, constantly loading pages from the server, which affects the user experience negatively. From the development perspective, the process tends to be more complex and will result in a coupled back and front end.

The rise of single-page applications

Like the name says, an SPA has only a single page. All the necessary code to render the application is retrieved in a single page load. After this initial load, no page reload is triggered, there is no new html file being fetched from the server. Instead, the application re-renders parts of the page as a result of any navigation in the browser. All the following communications between the application and the server are aimed at retrieving or posting data from and to the server and occur behind the scenes using well defined APIs from the back-end services.

SPAs rely heavily on JS to listen to events and re-render parts of the page. Everything happens through JS: this kind of architecture depends on it, and there is no way around it. Because of this, SPAs favour modern browsers that offer vast, more up-to-date JS support.
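As a minimal sketch of that idea, the snippet below re-renders a single container whenever the URL hash changes, using nothing but plain TypeScript in the browser; frameworks such as Angular or React do the same job far more robustly, and the routes and element id here are hypothetical.

// Tiny client-side router: the page is loaded once and navigation only swaps
// the content of one container. Routes and element id are illustrative.
const routes: Record<string, () => string> = {
  "#/": () => "<h1>Home</h1><p>Welcome to the app.</p>",
  "#/profile": () => "<h1>Profile</h1><p>User details rendered here.</p>",
};

function render(): void {
  const view = routes[window.location.hash || "#/"];
  const app = document.querySelector("#app");
  if (app) {
    app.innerHTML = view ? view() : "<h1>Not found</h1>";
  }
}

// Re-render on every navigation instead of requesting a new page from the server.
window.addEventListener("hashchange", render);
window.addEventListener("load", render);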

The behaviour of an SPA makes it a super-fast, responsive application, offering the user an interactive experience resembling that of a mobile or desktop app. From the development perspective, we achieve a decoupled back and front end. The back end is no longer responsible for rendering the view, and the communication between the two modules consists only of data exchanges. We also simplify the deployment process greatly.

The problem with SPAs resides in the challenge posed by making the application SEO friendly. Given that most of the page content is loaded asynchronously, search engine crawlers have no way of knowing that more data is coming to the page. There is no single standard solution to handle this drawback, but there are some tools that can be used to create an SEO-friendly SPA. It is also probable that in time SPA frameworks will evolve to make it easier for search engines to crawl and index application content.

Are single-page applications the future of the web?

This type of application has been around for years, but is only now becoming widespread in the developer world. This is mainly due to the appearance and increasing popularity of web frameworks and libraries that allow SPAs to be developed out of the box quickly and efficiently, such as Angular and React. If we compare the trends, we can see that the popularity of SPAs, Angular and React has evolved proportionately over time.

SPAs have been getting more and more popular, and it looks as if they are not going anywhere in the near future. The technical and functional benefits of SEO-friendly SPAs cannot be ignored, and it is expected that this type of application will become more common, especially with the evolution of the technologies involved and, hopefully, the resolution of some of the SPA pitfalls. However, we need to acknowledge that right now an SPA may not be the correct solution for every project.

Some MPA characteristics make this approach best suited to applications that serve a lot of content in different categories, and where search engine performance is highly important, such as online stores or marketplaces. SPAs are a good fit for dynamic platforms, possibly with a mobile component, where a complex interface and a satisfying, reactive user experience are key factors, such as social networks or closed communities. A third possibility exists for those who like SPAs and their characteristics but cannot fit the application onto a single page: by considering a hybrid application you can make the best of both approaches.

No architecture is absolutely right or absolutely wrong; you just need to know your needs and choose the best solution for you and your application.

Patrícia Pereira

Tableau 2019.3 Beta is out; let’s take a quick look!

Tableau is software that helps people see and understand data, transforming the way it’s used to solve problems. It makes analysing data fast and easy, beautiful and useful, so that data makes an impact.

This is Tableau’s goal: translate data into value for business with a positive impact.

A new version, Tableau 2019.3 Beta, has just been launched, and installing it reveals an interesting set of new capabilities that helped us improve on our goals. Below we’ve highlighted the features we liked the most:

  • Explain Data— A new feature to help you understand the ‘why’ behind unexpected values in your data;
  • Tableau Catalog — A new capability of the Data Management Add-on to ensure you are using the right data in the right way.

Explain Data

Explain Data provides explanations, using Bayesian statistical methods, for unexpected values in data. With this feature it is possible to identify causes and see new relationships in the data, and it is enabled on all existing workbooks for Creators and Explorers. No data prep or setup is required.

It’s very simple: select a mark and learn more about it.

The figure below presents an example of Explain Data applied to a selected mark.

In this example, we are analysing a visualisation of products and their average profits. We can see that the product Copiers has a profit far higher than the others. With Explain Data, we learn that this happened because there is a record for this product with a very high value that inflates the measure. The feature also displays a few visualisations related to this explanation, such as a table showing the record with this higher value.

The panel displayed by this feature presents the following components:

  1. Selected Mark Information – indicates what mark is being described and analysed;
  2. Measure Selection – shows the measures available to select the one in use for explanation;
  3. Expected Range Summary – describes whether the value is unexpected or not given the other marks in the visualisation;
  4. Explanation List – displays a list of the possible explanations for the value in the selected mark. Selecting an explanation in the list will display more details in the Explanation Pane on the right;
  5. Explanation Pane – displays the selected explanation using a combination of text and visualisations.

Tableau Catalog

This new feature aims to help organisations manage their data better, because we are at a time when it is very hard for users to find the right data and trust that they are using it in the right way. This feature will be available for Tableau Server and Tableau Online.

With Tableau Catalog it’s easy to get a complete view of all of the data being used in Tableau, and how it’s connected to the analytics. Data owners can automatically track information about the data, including user permissions, usage metrics and lineage, as shown in the figure below.

In this example, the view offered by this feature is like looking into a catalog of the data in the database. We can see the warnings that appear when there are data quality errors (a), such as missing fields, and we can see the lineage of the data (b), such as which tables are related and in which workbooks and sheets the data is being used.

Tableau Catalog also helps to build trust in the data across an organisation, creating a panel with data details (shown in the figure below):

  • Data Quality Warnings is where users can quickly see when there’s an issue with data being used in a dashboard – such as a missing field or maintenance interruption.
  • Definitions and additional metadata can be added in order for users to have a better understanding of the data itself.

These data details are included alongside the dashboard, enabling users and viewers to understand the source and lineage of data from within a visualisation.

In conclusion, with these new features, Tableau aims to:

  • Eliminate duplicate content and wasted time, and prevent analysis based on bad data with Tableau Catalog. With data quality warnings, you become aware when there is something wrong with the data values and can resolve it. One of the biggest changes is being able to see all the data sources that are in use, helping to avoid publishing duplicate data;
  • Provide faster explanations for unexpected values with Explain Data. This feature provides more detail about the data, especially outliers, and surfaces scenarios that can be further investigated, saving data exploration time, especially with large data sets.

With these new features, Tableau is getting stronger in the market, bringing unique characteristics to bear against its competitors. This is an advantage because many solutions are levelling up nowadays, and it is necessary to make a difference.

For more information and further details on the new features of Tableau 2019.3, click on the following link.

Carina Martins

A new strategic market: we’ve arrived in Sweden!

Xpand IT is a Portuguese company supported by Portuguese investment, and it is extraordinary how quickly we have grown within Portugal. At the end of 2018, the company recorded growth of 45% and revenue of around 15 million euros, which led Xpand IT to be distinguished in the Financial Times’ 2019 ranking (FT1000: Europe’s Fastest Growing Companies). Xpand IT was one of just three Portuguese technology companies featured in this ranking.

However, Xpand IT always seeks to grow further. We want to share our expertise with all four corners of the world and deliver a little bit of our culture to all our customers. It is true that Xpand IT’s international involvement has been increasing substantially, with 46.5% of our revenue coming from international customers at the end of last year.

This growth has been supported by two main focal points: exploring strategic markets such as Germany and the United Kingdom (where we now have a branch and an office), and strong leverage of our own products. Xray and Xporter, both associated with the Atlassian ecosystem, are used by more than 5 thousand customers in more than 90 countries! And new products are expected this year, in both artificial intelligence (Digital Xperience) and business intelligence.

This year, Xpand IT’s internationalisation strategy is to invest in new strategic markets in Europe, namely the Nordic countries. Sweden will be the first country of focus, but the goal is to expand our initiatives to the others: Norway, Denmark and Finland.

There are already various commercial initiatives in this market, and we can count on support from partners such as Microsoft, Hitachi Vantara and Cloudera, all already well established in countries like Sweden. Moreover, cultural barriers and different time zones do not have a significant impact, which makes this strategy an attractive investment prospect for 2019.

In the words of Paulo Lopes, CEO & Senior Partner at Xpand IT: “We are extremely proud of the growth the company has experienced in recent years and expect this success to keep on going. Xpand IT has been undergoing its internationalisation process for a few years now. However, we are presently entering a 2nd phase, where we will actively invest in new markets where we know that our technological expertise paired with a unique team and unique culture can definitely make a difference. We believe that Sweden makes the right starting point for investment in the Nordic market. Soon we will be able to give you even more good news about this project!…”

Ana Lamelas

Zwoox – Simplify your Data Ingestion

Zwoox is a data ingestion tool, developed by Xpand IT, that facilitates data imports and structuring into a Hadoop cluster.

This tool is highly scalable, thanks to its complete integration with Cloudera Enterprise Data Hub, and takes full advantage of several different Hadoop technologies, such as Spark, HBase and Kafka. Zwoox eliminates the need to hand-code data pipelines, regardless of the data source.

One of Zwoox’s biggest advantages is its capability to accelerate data ingestions, offering numerous options for data import and allowing real-time RDBMS DML replications for Hadoop data structures.

Despite the number of different tools that allow data import for Hadoop clusters, only Zwoox is capable of executing the import in an accessible, efficient and highly scalable manner, maintaining data in HDFS (with Hive tables) or Kudu.

Some of the possibilities offered by Zwoox:

  • Automation and partitioning in HDFS;
  • Translation of data types;
  • Full or delta upload;
  • Audit charts (with full history) without impacting on performance;
  • Derivation of new columns with pre-defined functions or “pluggable” code;
  • Operational integration with Cloudera Manager.

This tool is available on Cloudera Solutions Center and will be available soon on Xpand IT’s website. Meanwhile, you can access our informative document. If you’d like to learn more about Zwoox or data ingestion, please contact us.

Ana Lamelas

Biometric technology for recognition

Nowadays it is more essential than ever to ensure that users feel safe when using a service, using a mobile app or registering on a website. The user’s priority is to know that their data is properly protected. Consequently, biometric technology for recognition plays an increasingly crucial role as one of the safest and most efficient ways to authenticate user access to mobile devices, personal email accounts and even online bank accounts.

Biometrics has become one of the fastest, safest and most efficient ways to protect individuals, not only because biometric data is already used to identify each person as a citizen of a country – fingerprints are among the data collected and stored for legal purposes and identity documents – but also because it is the most casual (and reliable) way to protect our phones. The advantages of using biometric technology for recognition are efficiency, precision, convenience and scalability.

In IT, biometrics is primarily associated with identity verification using a person’s physical or behavioural features – fingerprints, facial recognition, voice recognition and even retina/iris recognition. We are referring to technologies that measure and analyse features of the human body as a way to grant or deny access.

But how does this identification work in the backend? Software identifies specific points of the presented data as reference points. These points are then processed and, using an algorithm, converted into a numeric value. It is this value that is compared with the user’s registered biometric entry when the scanner captures a sample, and the user’s authentication is approved or denied depending on whether there is a match.

The process of recognition can be carried out in two ways: comparing one value against many, or comparing one value against another. Recognition of one value against many happens when a user’s sample is submitted to a system and compared with the samples of other individuals, while authentication of one value against another works with a single user, comparing the provided data with previously submitted data – as with our mobile devices.
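As a purely illustrative sketch of those two modes, the TypeScript below reduces matching to comparing numeric feature vectors against a threshold; real systems use far more sophisticated feature extraction and scoring, and the vectors and threshold shown are made up.

// Illustrative sketch of biometric matching: the enrolled template and the new
// sample are numeric feature vectors, and a match means they are close enough.
type BiometricTemplate = number[];

function distance(a: BiometricTemplate, b: BiometricTemplate): number {
  // Euclidean distance between the two feature vectors.
  return Math.sqrt(a.reduce((sum, value, i) => sum + (value - b[i]) ** 2, 0));
}

// One value against another (verification): compare a sample with one enrolled template.
function verify(sample: BiometricTemplate, enrolled: BiometricTemplate, threshold = 0.25): boolean {
  return distance(sample, enrolled) <= threshold;
}

// One value against others (identification): compare a sample with many enrolled templates.
function identify(sample: BiometricTemplate, enrolledUsers: Map<string, BiometricTemplate>): string | null {
  for (const [userId, template] of enrolledUsers) {
    if (verify(sample, template)) {
      return userId;
    }
  }
  return null;
}

// Example usage with made-up feature vectors.
const enrolled = new Map<string, BiometricTemplate>([["alice", [0.12, 0.87, 0.33]]]);
console.log(identify([0.10, 0.90, 0.31], enrolled)); // "alice" – within the threshold
console.log(identify([0.90, 0.10, 0.75], enrolled)); // null – no match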

There are countless types of biometric readings; these are some of the most common:

  1. Fingerprinting (one of the most used and most economical biometric technologies for recognition, since it has a significant degree of accuracy. In this type of verification, various points of a fingerprint are analysed, such as ridge endings and unique arches). Examples: apps from Médis, MBWay or Revolut;
  2. Facial recognition (uses a facial image of the user, composed of various identification points on the face, measuring, for example, the distance between the eyes and the nose, as well as the bone structure and the lines of each facial feature. This reading has some failure rate, depending, for instance, on whether the user has a beard or sunglasses). Examples: Apple’s Face ID;
  3. Voice recognition (carried out from an analysis of an individual’s vocal patterns, combining physical and behavioural factors. However, it is not one of the most reliable methods of recognition). Examples: Siri, from Apple, or Alexa, from Amazon;
  4. Retina/iris recognition (the least used; it works by storing lines and geometric patterns – in the case of the iris – or the blood vessels in the eye – in the case of the retina. Reliability is very high, but so are the costs, which makes this method less often used). Read this article on identity recognition in the banking industry;
  5. Writing style (lastly, behavioural biometrics based on writing style: a way to authenticate a user through their writing – for example, a signature – since the pressure on the paper, the speed of the writing and the movements in the air are very difficult to copy. This is one of the oldest authentication tools, used mainly in the banking industry). Read the article on the Read API, Microsoft Azure.
Ana Lamelas