Latest news

A new strategic market: we’ve arrived in Sweden!

Xpand IT is a Portuguese company supported by Portuguese investment, and it is extraordinary how quickly we have expanded within Portugal. At the end of 2018, the company recorded growth of 45% and revenue of around 15 million euros, which led Xpand IT to be recognised in the Financial Times’ 2019 ranking (FT1000: Europe’s Fastest Growing Companies). Xpand IT was one of just three Portuguese technology companies to be featured in this ranking.

However, Xpand IT always seeks to grow further. We want to share our expertise with all four corners of the world and deliver a little bit of our culture to all our customers. It is true that Xpand IT’s international involvement has been increasing substantially, with 46.5% of our revenue coming from international customers at the end of last year.

This growth has been supported by two main focal points: exploring strategic markets such as Germany and the United Kingdom (where we now have a branch and an office), and strong leverage of the products we have developed. Xray and Xporter, both part of the Atlassian ecosystem, are used by more than 5,000 customers in more than 90 countries! And new products are expected this year, in both artificial intelligence (Digital Xperience) and business intelligence.

This year, Xpand IT’s internationalisation strategy is to invest in new strategic markets in Europe, namely the Nordic countries. Sweden will be the first focus, but the goal is to extend our initiatives to the rest of them: Norway, Denmark and Finland.

There are already various commercial initiatives in this market, and we can count on support from partners such as Microsoft, Hitachi Vantara and Cloudera, all already well established in countries like Sweden. Moreover, cultural barriers and time-zone differences have little impact, which makes this strategy an attractive investment prospect for 2019.

In the words of Paulo Lopes, CEO & Senior Partner at Xpand IT: “We are extremely proud of the growth the company has experienced in recent years and expect this success to keep on going. Xpand IT has been undergoing its internationalisation process for a few years now. However, we are presently entering a 2nd phase, where we will actively invest in new markets where we know that our technological expertise paired with a unique team and unique culture can definitely make a difference. We believe that Sweden makes the right starting point for investment in the Nordic market. Soon we will be able to give you even more good news about this project!…”

Ana Lamelas

Zwoox – Simplify your Data Ingestion

Zwoox is a data ingestion tool, developed by Xpand IT, that facilitates data imports and structuring into a Hadoop cluster.

This tool is highly scalable, thanks to its complete integration with Cloudera Enterprise Data Hub, and takes full advantage of several different Hadoop technologies, such as Spark, HBase and Kafka. Zwoox eliminates the need to hand-code data pipelines, regardless of the data source.
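To put that in context, here is a minimal sketch of the kind of hand-coded pipeline Zwoox is designed to replace – a PySpark job that pulls one table over JDBC and lands it in Hive. All connection details and table names are hypothetical, and this is not Zwoox’s own API:

    # Hand-coded ingestion sketch: read one table over JDBC and persist it as a Hive table.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("manual-ingestion")
             .enableHiveSupport()
             .getOrCreate())

    customers = (spark.read.format("jdbc")
                 .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCL")  # hypothetical source
                 .option("dbtable", "SALES.CUSTOMERS")
                 .option("user", "etl_user")
                 .option("password", "***")
                 .load())

    # Write to HDFS as a partitioned Hive table.
    customers.write.mode("overwrite").partitionBy("COUNTRY").saveAsTable("staging.customers")

Multiply this by every source table, plus change capture, scheduling and monitoring, and the appeal of automating it becomes clear.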

One of Zwoox’s biggest advantages is its ability to accelerate data ingestion, offering numerous options for data import and allowing real-time replication of RDBMS DML into Hadoop data structures.

Although there are several tools that can import data into Hadoop clusters, only Zwoox executes the import in an accessible, efficient and highly scalable manner while maintaining the data in HDFS (with Hive tables) or Kudu.

Some of the possibilities offered by Zwoox:

  • Automation and partitioning in HDFS;
  • Translation of data types;
  • Full or delta upload;
  • Audit tables (with full history) without impacting performance;
  • Derivation of new columns with pre-defined functions or “pluggable” code;
  • Operational integration with Cloudera Manager.

This tool is available in the Cloudera Solutions Center and will soon be available on Xpand IT’s website. In the meantime, you can access our informative document. If you’d like to learn more about Zwoox or data ingestion, please contact us.

Ana Lamelas

Biometric technology for recognition

Nowadays it is more essential than ever to ensure that users feel safe when using a service or a mobile app, or when registering on a website. The user’s priority is to know that their data is properly protected. Consequently, biometric recognition technology plays an increasingly crucial role as one of the safest and most efficient ways to authenticate user access to mobile devices, personal email accounts and even online bank accounts.

Biometrics has become one of the fastest, safest and most efficient ways to protect individuals: not only is it already part of how each person is identified as a citizen of a country – fingerprints, for instance, are collected and stored for legal purposes and documents – but it is also the most convenient (and reliable) way to protect our phones. The advantages of using biometric recognition technology are efficiency, precision, convenience and scalability.

In IT, biometrics is mainly used for identity verification based on a person’s physical or behavioural features – fingerprints, facial recognition, voice recognition and even retina/iris recognition. We are referring to technologies that measure and analyse features of the human body as a way to allow or deny access.

But how does this identification work in the backend? The software identifies specific points in the presented data and uses them as reference points. These reference points are processed and converted by an algorithm into a numeric value, which is stored in a database. It is this value that is compared with the sample captured by the scanner, and the user’s authentication is approved or denied depending on whether or not there is a match.

The process of recognition can be carried out in two ways: comparing one value against many, or comparing one value against another. One-to-many recognition happens when a user’s sample is submitted to a system and compared against samples from other individuals, while one-to-one authentication works with a single user, comparing the data provided against data submitted previously – as with our mobile devices.
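As a toy illustration of these two modes – not a real biometric algorithm – the sketch below assumes feature vectors have already been extracted from the samples and simply compares them against a distance threshold:

    # Toy sketch of biometric matching on pre-extracted feature vectors.
    # Real systems use far more sophisticated models and thresholds.
    from typing import Dict, Optional
    import numpy as np

    THRESHOLD = 0.5  # hypothetical similarity threshold

    def verify(sample: np.ndarray, enrolled: np.ndarray) -> bool:
        """One-to-one: compare the sample with a single enrolled template."""
        return float(np.linalg.norm(sample - enrolled)) < THRESHOLD

    def identify(sample: np.ndarray, gallery: Dict[str, np.ndarray]) -> Optional[str]:
        """One-to-many: find the closest enrolled template, if it is close enough."""
        best_user, best_dist = None, float("inf")
        for user, template in gallery.items():
            dist = float(np.linalg.norm(sample - template))
            if dist < best_dist:
                best_user, best_dist = user, dist
        return best_user if best_dist < THRESHOLD else None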

There are countless types of biometric reading; these are some of the most common:

  1. Fingerprinting (one of the most used and most economical biometric recognition technologies, since it has a significant degree of accuracy. In this type of verification, various points of a finger are analysed, such as ridge endings and unique arches). Examples: the apps from Médis, MBWay or Revolut;
  2. Facial recognition (uses a facial image of the user, composed of various identification points on the face, which can define, for example, the distance between the eyes and the nose, as well as the bone structure and the lines of each facial feature. This reading has a certain failure rate, depending on whether the user has a beard or is wearing sunglasses). Example: Apple’s Face ID;
  3. Voice recognition (recognition is carried out through an analysis of an individual’s vocal patterns, combining physical and behavioural factors. However, it is not the most reliable method of recognition). Examples: Siri, from Apple, or Alexa, from Amazon;
  4. Retina/iris recognition (the least used of these methods, it works by storing lines and geometric patterns – in the case of the iris – or the blood vessels in the eye – in the case of the retina. Reliability is very high, but so are the costs, which makes this method less often used). Read this article on identity recognition in the banking industry;
  5. Writing style (lastly, behavioural biometrics based on writing style: a way to authenticate a user through their writing – for example, a signature – since the pressure on the paper, the speed of the writing and the movements in the air are very difficult to copy. This is one of the oldest authentication tools, used mainly in the banking industry). Read the article on Read API, Microsoft Azure.
Ana Lamelas

Using Salesforce with Pentaho Data Integration

Pentaho Data Integration is the tool of the trade for moving data between systems, and it doesn’t have to be just a business intelligence process: we can actually use it as an agile tool for point-to-point integration between systems. PDI has its own Salesforce Input step, which makes it a good candidate for this kind of integration.

What is Salesforce?

Salesforce is a cloud solution for customer relationship management (CRM). As a next generation multi-tenant Platform as a Service (PaaS), its unique infrastructure enables you to focus your efforts where they are most essential: creating microservices that can be leveraged in innovative applications and really speeding up the CRM development process.

Salesforce is the right platform to give you a complete 360º view of your customers and their interactions with your brand, whether this happens via your email campaigns, call centres, social networks or a simple phone call. Marketing automation, for example, is just one of the many great things Salesforce brings you in an all-in-one platform.

How do we use PDI to connect to Salesforce?

For this access we need all our Salesforce connection details: the username, the password and the SOAP web service URL. The PDI version has to be compatible with the SOAP API version that you use. For example:

  PDI version   SOAP API version
  2.0           1.0
  3.8           20.0
  4.2           21.0
  6.0           24.0
  7.0           37.0
  8.2           40.0

Nevertheless, even when Salesforce releases a new version of the API, we can still use the old one perfectly well. Just be careful: if you have created new custom modules inside the platform, the API version you are using may not expose those customisations, and you will need the Salesforce Object Query Language (SOQL) to get that data. But don’t worry – we’ll explain it all in the next section.

How do we query Salesforce data with SOQL?

The SOQL syntax is quite similar to SQL syntax, but with a few differences:

  1. SOQL does not accept special characters (such as * or ;), so we have to list explicitly all the fields we want to get from Salesforce, and we cannot end the query with a ;.
  2. We cannot use comments in a query either; SOQL does not support them.
  3. To create joins we need to know a few things (see the SOQL sketch after this list):
    • For native modules linked through a direct relationship, the relationship name takes an ‘s’ at the end. For example: get all Orders, with and without Products (OrderItem module).
    • For custom modules from which we need to get data through a direct relationship, the relationship name takes ‘__r’ at the end. For example: filter OrderItems by the Product_Skins__c field inside the Product2 module.
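A hedged sketch of what queries like these might look like in SOQL. The custom field name Product_Skins__c comes from the text above; everything else is illustrative and depends on the org’s configuration:

    SELECT Id, OrderNumber, (SELECT Id, Quantity FROM OrderItems) FROM Order

    SELECT Id, Quantity FROM OrderItem WHERE Product2.Product_Skins__c = 'Example'

The first query returns all Orders, with or without child OrderItems (the child relationship name gets the ‘s’). In the second, the filter traverses the relationship to the related Product2 record; when the relationship is defined by a custom lookup field, its name in the query ends in ‘__r’ instead (for example, My_Lookup__r.Name).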

How do we extract data from Salesforce with PDI?

We can use the Salesforce Input step inside PDI to get data from Salesforce using SOQL; just be aware that the query can only be up to 20,000 characters long.

  • Connection parameters:
    • Salesforce web service URL:

<Salesforce instance URL>/services/Soap/u/<SOAP API version>

(for example, https://login.salesforce.com/services/Soap/u/40.0)

    • Username: the username used to access the platform (e.g. myname@pentaho.com);
    • Password: the password concatenated with the security token (the company provides the token, which we append to the password in kettle.properties), e.g. PASSWORDTOKEN.

Settings parameters:

    • Specify query: when this option is not active, we only need to choose the module (the table containing the records we need to access); when it is active, we provide our own SOQL query.

In the next tab (Content) we have the following parameter options:

  • If we want to get all records from Salesforce (that is, deleted records as well as inserted and updated ones), we need to tick Query All Records and choose one of the following filters:
    • All – get new/updated records as well as deleted records;
    • Updated – get only inserted and updated records;
    • Deleted – get only deleted records.
  • If we untick Query All Records, we only get inserted/updated records.

How does PDI know if records are new/updates or deletes?

Salesforce has native fields that are very useful for controlling the process. However, we cannot see these fields in the layout or in the schema builder in SF; we can only see the data in these specific fields by accessing them through SOQL or PDI (see the example query after the list below).

  • CreatedById and CreatedDate show the user and the date/time when each record was created.
  • LastModifiedDate and LastModifiedById show the date/time and the user of the last modification. We can use these fields to get data that has been updated in SF.
  • Id (the Salesforce Id) is an 18-character string, also present in the record’s URL, that identifies each record.
  • There is one more native field, IsDeleted (data type Boolean), that shows whether the record has been removed (IsDeleted = true) or not (IsDeleted = false).
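For instance, a hedged sketch of an incremental SOQL query that uses these native fields to pull only recently modified Accounts (the object and the date literal are just illustrations):

    SELECT Id, Name, CreatedDate, LastModifiedDate, IsDeleted
    FROM Account
    WHERE LastModifiedDate > 2019-01-01T00:00:00Z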

In the Additional fields option we have three settings:

  • Time out: the timeout interval in milliseconds before the step gives up, useful in asynchronous systems;
  • Use compression: useful to get more performance from the process, since calls to and from the API are compressed (gzip) when this option is ticked;
  • Limit: the maximum number of records to retrieve with the query.

Inside the last tab (Fields) we can see all the fields returned by the query defined in the first tab: without a custom SOQL query we get all the module’s fields, while with SOQL we get the fields listed in the SELECT clause.

In some cases we need to adjust these fields manually.

Base64 fields hold images or PDFs present in SF. If we need to send images (.jpeg) or PDFs (.pdf) directly to SF, we load these fields by using Java to convert the binary files to Base64 – for example, a PDF file has to be converted to Base64 before it can be sent to SF.

How do we load data into Salesforce with PDI?

We can send data to Salesforce from other databases or from Salesforce itself.

The connection options are the same as described for the Salesforce Input step.
In the Settings options we have new parameters:

  • Rollback all changes on error – if any error occurs, nothing is integrated into SF;
  • Batch size – the number of records sent and integrated into SF at a time (in the same batch);
  • In the Output fields label we add the name of the field that will receive the Salesforce Id of each integrated record.

In the Fields option, we define the field mapping:

  • In Module field, we put the API name of the SF field that will receive the new data;
  • In Stream field, we put the name of the incoming field that will be loaded into the respective SF field;
  • Use External id = N for fields that belong directly to the module being updated;
  • Use External id = Y for fields that identify records in another module (matched through an external Id) rather than the current one.

Delete records inside Salesforce

We delete records from Salesforce with the Salesforce Delete step. We need to specify the key field coming from the Table Input that references the key in Salesforce (the Salesforce Id).

Update Salesforce records

If we only want to update records in SF, we need to use the Salesforce Update step.
Inside the Fields option (key included), we need to add the key that identifies the records in the specific module.

Upsert data to Salesforce

If we want to insert and update records in the same batch, we need to use the Salesforce Upsert step.
The Upsert comparison field parameter defines the field used to match the data already in SF.

Fátima Miranda

Meetup Data Science Hands-on by Lisbon Kaggle: hot topics

Data Science Hands-on: “Predicting movies’ worldwide revenue”

On May 4th, a day known worldwide as Star Wars Day (“May the fourth”), approximately 40 Data Science fans seized the occasion to learn more about the subject by practising and sharing at yet another Lisbon Kaggle meetup. The “Data Science Hands-on” meetup took place at Instituto Superior Técnico (IST campus) and was dedicated, fittingly, to cinema:

  • the problem addressed consisted of predicting movies’ worldwide revenue before their premiere!

This event was also sponsored by Xpand IT, in collaboration with Hackerschool Lisboa, a group of IST students interested in technology who also evangelise the practice of learning by doing.

The event started with a presentation by Xpand IT’s own Ricardo Pires, who introduced the company and its units focused on data treatment and exploration, giving participants a sense of how these problems fit into a real-world context. Shortly after, professor Rui Henriques, who teaches Data Science at IST, shared his perspective on how to approach a Data Science problem, providing some tips related to the meetup’s challenge.

The data for this challenge boost learning and give a good idea of a real-world problem, as they are semi-structured and demand a great deal of effort to process.

An estimated 80% of Data Scientists’ daily work revolves around data treatment.

(Source: Forbes)

After the two presentations, participants started to unravel the mysteries hidden within the data. They observed, for example, a general increase in revenue over the years, and also noticed that American movies had higher revenues than all the rest.

Tackling the challenge

In the first part, participants modelled the problem with the simpler, structured columns:

  • budget
  • popularity
  • runtime
  • date

In doing so, they tried to obtain the first predictions of the movies’ revenue. The image below, showing Spearman’s rank correlation coefficients, confirms that the budget and popularity columns are the ones most correlated with revenue.
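As a minimal sketch, assuming the competition data has been loaded into a pandas DataFrame with the column names listed above, such a correlation matrix can be computed like this:

    import pandas as pd

    movies = pd.read_csv("train.csv")  # hypothetical path to the competition data
    cols = ["budget", "popularity", "runtime", "revenue"]
    print(movies[cols].corr(method="spearman"))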

During the second phase, contestants tackled the semi-structured columns, applying the one-hot encoding technique to columns such as the two below (a short sketch follows the list):

  • director
  • cast
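A hedged illustration of that encoding, assuming director holds a single name per movie and cast has already been parsed into a list of names per movie:

    import pandas as pd
    from sklearn.preprocessing import MultiLabelBinarizer

    movies = pd.read_csv("train.csv")  # as in the previous sketch; cast parsed into lists beforehand

    # One indicator column per director (a single value per movie).
    director_dummies = pd.get_dummies(movies["director"], prefix="director")

    # One indicator column per actor (several values per movie).
    mlb = MultiLabelBinarizer()
    cast_dummies = pd.DataFrame(mlb.fit_transform(movies["cast"]),
                                columns=mlb.classes_, index=movies.index)

    features = pd.concat([movies, director_dummies, cast_dummies], axis=1)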

Through this deeper analysis of the data, the teams found out which movies generated the most revenue (see the table below).

Another relevant aspect to consider is that popularity is not always directly related to revenue: that is the case with “Transformers: Dark of the Moon”, which appears as less popular but nonetheless has a high revenue.

It is also interesting to observe the actors who generated more revenue on average:

Conclusions

At the end of the meetup, participants shared their implemented solutions:

  • The group with the best results applied Logistic Regression. Despite being a simple model, it can provide adequate results when the focus is on data treatment.
  • Data treatment involved several techniques, such as detecting outliers – movies with a very discrepant budget – and replacing those values with the median (a short sketch follows this list).
  • The budget and revenue columns were transformed into their respective logarithms, in order to bring them closer to a Gaussian distribution.
  • One of the advantages of using a simpler model is that these are also easier to explain to a business stakeholder.
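A minimal sketch of those two treatment steps, assuming the same movies DataFrame as before (the 99th-percentile outlier rule is an assumption for illustration):

    import numpy as np
    import pandas as pd

    movies = pd.read_csv("train.csv")  # hypothetical competition data, as before

    # Replace outlier budgets (here, above the 99th percentile) with the median.
    median_budget = movies["budget"].median()
    limit = movies["budget"].quantile(0.99)
    movies.loc[movies["budget"] > limit, "budget"] = median_budget

    # Log-transform budget and revenue to bring them closer to a Gaussian distribution.
    movies["log_budget"] = np.log1p(movies["budget"])
    movies["log_revenue"] = np.log1p(movies["revenue"])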

The fourth of May was spent learning alongside the most wonderful people, and it was enlightening in every way. If you’re interested in Data Science, join the community and come along to our monthly events.

More information on the “Data Science Hands-on” Meetup.

Joana Pinto

Data Scientist, Xpand IT

Alexandre Gomes

Data Scientist, Xpand IT

Sara Godinho

Bootstrap: Introduction to the world’s most popular CSS library

Bootstrap is the most popular HTML, CSS and JavaScript based framework for developing responsive, mobile-first websites.

With the continued growth of mobile devices around the world, it is becoming clear that having a responsive website is a must. By taking a mobile-first approach, this framework has proved to be an indispensable tool and has become more popular year after year, mostly because of its feature-rich nature and ease of use. One of the most essential aspects of this framework, and the foundation on which to build an organised, structured layout, is its grid. Bootstrap is built on a powerful 12-column grid system, which allows developers to arrange and align content in a fully customisable, responsive grid. The grid adjusts according to the device resolution or viewport size, making the website content easy to interact with and pleasant for both mobile and desktop users.

Beyond this, Bootstrap offers a base style for most HTML elements, making the website look more polished, as well as an extensive list of pre-built, fully-responsive components that are easy to integrate and customise. In terms of customisation, Bootstrap lets you change the base style, including fonts, colours and sizes, as well as modifying the existing breakpoints used in grid layout by overriding the existing CSS rules with custom ones according to the project design.

Bootstrap can also offer great benefits to those who prefer to build a responsive website from scratch, without the assistance of any third-party libraries – whether by reusing ready-made CSS code and components from previous projects, or simply because they take a more conservative approach to adopting framework features.

So, what are these benefits of Bootstrap?

Well, when you have a project with a tight schedule and multiple developers involved, Bootstrap offers consistency between projects and people (it is a commonly known technology) as well as speed of development, thanks to its pre-styled classes, which require much less effort and time than creating everything from scratch. It’s also important to mention that Bootstrap has good cross-browser compatibility, being currently compatible with all the latest major browsers (Chrome, Firefox, Safari, Microsoft Edge and Internet Explorer 10+), and excellent support, thanks to the huge community behind it. And, most importantly, it’s completely free and open-source. Before looking at some examples, let’s see how easy it is to get started with Bootstrap.

Diogo Cardante

Practical guide to installing Kotlin

Time passes and the Kotlin programming language keeps gaining fans, especially when we talk about Android programming. However, Kotlin is not limited to Android mobile app development: it can be used as a programming language for the JVM, for the browser, or natively, without having to run in a virtual machine.

Kotlin is 100% interoperable with Java, which allows you to add code in Kotlin to a project that has been started in Java.

One of the great advantages of this language is its null safety, which practically eliminates NullPointerExceptions.

In a direct comparison with Java, it is possible to create the same classes using fewer lines of code.
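For example, a simple data-holding class that in Java would need a constructor, getters, equals, hashCode and toString can be written in a single line of Kotlin (a generic illustration, not taken from the guide):

    // The compiler generates equals(), hashCode(), toString() and copy() for a data class.
    data class User(val name: String, val email: String)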

If you were convinced by all of these arguments, or if this language has made you curious, download our quick guide on how to install Kotlin, along with some basic concepts.

Download kotlin installation guide

If you want to know more about the Kotlin programming language, we recommend reading this blog post: Kotlin and a brighter future.

Bruno Azevedo

Advanced Analytics: learn how to elevate data analysis to a whole new level

Implementing a business intelligence model requires more than just gathering data; above all, it’s about converting big data into valuable insights that add value to the business. However, if there’s no model available that allows you to analyse and understand the incoming data, all you’ll get is meaningless numbers with no added value.

In order to perform a correct data analysis, it is necessary to understand that there is no single valid method of analysis; the most suitable methodology depends on the needs, the requirements and the type of data collected.

However, there are some methods common to most advanced analytics that are capable of turning data into added value even when there are no established business rules, transforming masses of data into relevant insights that benefit the business and enable well-founded decision-making.

Quantitative data and qualitative data

Before covering the various methods, let’s identify the precise type of data you want to analyse. For quantitative data, the focus is on raw number quantity, as the name suggests. Examples of this type of data include sales figures, marketing data, payroll data, revenue and expenses, etc. Basically, all the figures that are quantifiable and objectively measured.

Qualitative data, on the other hand, is fundamentally harder to interpret, given its lack of structure and its more subjective, interpretive nature. At this end of the spectrum you can find examples such as information collected from surveys or polls, employee interviews, customer satisfaction questionnaires and so on.

Measuring quantitative data

Looking at the analysis of quantitative data, there are four methods capable of taking that very same analysis to the next level.

  1. Regression analysis

The choice of the best type of statistics will always depend on the main goal of the research.

Regression analysis models the relationship between a dependent variable and one or more independent variables. In data mining, this technique is used to predict values in a particular dataset. For example, it can be used to forecast the price of a certain product while taking other variables into account. It can also be useful for identifying trends and correlations between different factors.

Regression is one of the most common methods of data analysis used in the market, for management purposes, marketing planning, financial forecasting and much more.
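A minimal sketch of such a regression in Python with scikit-learn; the data and the two explanatory variables are invented purely for illustration:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Toy data: [advertising spend, store visits] vs. observed product price.
    X = np.array([[10, 200], [15, 240], [20, 310], [25, 330], [30, 400]])
    y = np.array([12.0, 13.5, 15.0, 16.2, 18.1])

    model = LinearRegression().fit(X, y)
    print(model.coef_, model.intercept_)
    print(model.predict([[22, 300]]))  # predicted price for a new observation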

  2. Hypothesis testing/significance testing

This method, often exemplified by the “t-test”, determines whether a certain premise holds for the relevant dataset. In data analysis and statistics, a result is only considered to support a hypothesis if it is statistically significant – that is, unlikely to have arisen from a random occurrence. The procedure uses probability theory to draw conclusions about a quantity of interest in a population, based on a studied sample.
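A small sketch of a two-sample t-test with SciPy; the samples and the 0.05 significance level are assumptions for illustration:

    from scipy import stats

    # Hypothetical samples: e.g. conversion values observed for two website variants.
    variant_a = [12.1, 13.4, 11.8, 12.9, 13.1, 12.4]
    variant_b = [13.8, 14.2, 13.5, 14.9, 13.9, 14.4]

    t_stat, p_value = stats.ttest_ind(variant_a, variant_b)
    print(t_stat, p_value)
    if p_value < 0.05:
        print("Statistically significant difference between the two samples")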

  3. Monte Carlo simulation

One of the most popular methods for calculating the effect of unpredictable variables on a specific factor is the Monte Carlo simulation, which uses probability modelling to defend against risk and uncertainty. To test a scenario or hypothesis, the simulation uses random numbers and data to generate a variety of possible outcomes. This tool is frequently used in project management, finance, engineering and logistics, amongst other areas. By testing a wide variety of hypotheses, it is possible to discover how a series of random variables can affect plans and projects.
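A minimal Monte Carlo sketch in Python: estimating the distribution of a project’s total duration when two of its tasks are uncertain. All the distributions and numbers are assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000  # number of simulated scenarios

    # Uncertain task durations (in days), modelled with assumed distributions.
    task_a = rng.normal(loc=10, scale=2, size=n)
    task_b = rng.triangular(left=5, mode=8, right=15, size=n)

    total = task_a + task_b
    print("Expected duration:", total.mean())
    print("95th percentile:", np.percentile(total, 95))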

  4. Artificial neural networks

This computational model is inspired by the human central nervous system (in this case, the brain), allowing the machine to learn by observing data (so-called “machine learning”). This type of information processing mimics biological neural networks, using a biologically inspired model to process information, learn through analysis and make predictions at the same time. In this model, the algorithms learn from sample inputs, applying inductive reasoning – extracting rules and patterns from large sets of data.
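As a tiny illustration, scikit-learn’s multi-layer perceptron learning a toy input–output mapping (data, network size and settings are all arbitrary assumptions):

    from sklearn.neural_network import MLPRegressor

    # Toy training data: inputs and the target values the network should learn.
    X = [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]]
    y = [0.0, 0.8, 0.9, 0.1, -0.7, -1.0]

    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    net.fit(X, y)
    print(net.predict([[2.5]]))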

Sílvia Raposo

5 Business Intelligence books you have to read

At Xpand IT, we believe that business intelligence goes way beyond reports and dashboards. We are expert providers of BI solutions, developing projects with the ever-present goal of adding value to any business. Many companies have already placed their bets on data analysis software, recognising the huge potential that such insights represent to progress. However, there is still a small percentage of companies unable to recognise the proper value of internal data analyses and which, therefore, choose not to provide them to their clients. And so, we’ve picked 5 great business intelligence books for you to read, to help you discover more about adopting a complete BI strategy suited to your own situation. In this digital era, we’ve chosen physical formats to help you understand modern BI strategies that you can implement, going way beyond the standard pattern.

As stated by John Owen: “Data is what you need to do analytics. Information is what you need to do business.”

1. Business Intelligence Guidebook: From Data Integration to Analytics

1st Edition, November 2014

This is one of the more comprehensive books about business intelligence and data integration, touching on simple topics as well as vastly more complex architectures. The author guarantees that after reading this book you will be able to develop a BI project, launch it, manage it, and deliver it on time and within budget. You will also be able to implement a complete strategy for your company – supported by the tools he introduces.

If you’re looking for a reliable source of information, capable of explaining the best practices, the best approaches, and presenting a complete overview of the entire life cycle of a BI project, adaptable for companies of any size, don’t look any further: this is the right book for you.

2. Data Strategy: How to Profit from a World of Big Data, Analytics and the Internet of Things

Bernard Marr – 1st Edition, April 2017

The author starts from the premise that less than 0.5% of all generated data is currently being analysed and used, building a compelling narrative to convince company leaders to invest in business intelligence strategies, focusing on the benefits for business growth.

Complemented with case studies and real examples, this book explains how to translate the data generated by companies into insights to support the strategic decision-making process. This aims to improve companies’ business practices and performance, with a vital combination of Big Data, Analytics and Internet of Things.

3. Agile Data Warehouse Design: Collaborative Dimensional Modeling, from Whiteboard to Star Schema

Lawrence Corr and Jim Stagnitto – 1st Edition, November 2011

This is a book for professionals looking to implement data warehousing and business intelligence requirements, turning them into dimensional models, with the help of BEAM (Business Event Analysis & Modeling) – an agile methodology for dimensional models that aims to improve communication between data warehouse designers, BI stakeholders and their development teams.

If you want to implement this methodology in your company, or if you’re just curious about this approach, we strongly recommend exploring this book, which covers, amongst other topics, data modelling, visual modelling and data stories, using the 7 Ws (who, what, where, when, how many, why and how).

4. Successful Business Intelligence: Unlock the Value of BI & Big Data

Cindi Howson – 2nd Edition, November 2013

This is not the most recent edition, but the wealth of information it contains still makes it one of the best must-have business intelligence books you can read. The author, Research Vice President at Gartner and BI analyst, has conducted a study with the objective of identifying analytics strategies implemented by some of the biggest players in the market.

This book provides much more than just theory. It is a valuable manual that tells stories and lays out successful BI approaches, explaining why the strategies implemented cannot be the same for every company. Additionally, the book includes tips on how to achieve an adequate alignment between a company’s BI strategy and its commercial objectives.

5. Business Intelligence – Da Informação ao Conhecimento

Maribel Yasmina Santos and Isabel Ramos – 3rd Edition, September 2017

This is the only Portuguese book on our list, and it’s very comprehensive, explaining the basic concepts of data analysis and demonstrating how BI technologies can be implemented – from the data warehouse storage process to the analysis of the data (online analytical processing and data mining), outlining how the resulting knowledge can be used by companies to support decision-making.

An essential book, whether you’re a professional searching for a complementary source of information or simply looking for reasons to implement a business intelligence strategy in your company.

If you would like to know more about some of the topics mentioned above, or if you want to implement your own BI strategy, get in touch with us today!

Ana Lamelas

ITIL: sound practices to improve your IT service management

ITIL is an acronym for Information Technology Infrastructure Library, a set of good practices designed to facilitate a significant improvement to the operation and management of all the IT services within a company. When implemented by an organisation, this set of practices becomes an unequivocally beneficial asset, as it comes with several advantages, such as the improvement of risk management, the strengthening of client relationships, an increase in productivity and reduced costs.

Developed in the 1980s by the Central Computer and Telecommunications Agency (CCTA) – a British government agency – it is the primary framework for sound IT service management (ITSM). It began as a collection of more than 30 books, drawing on numerous sources of information and describing good practices to follow in relation to IT services. Currently, ITIL runs to 5 books covering its various processes and functions (a total of 26 processes that can be adopted by companies).

In 2005 the framework was finally formally recognised and given the ISO/IEC 20000 Service Management seal of approval for compliance with desired standards, and for being truly aligned with Information Technology best practice.

ITIL has gone through various revisions and there are now 4 different versions, the most recent released at the start of 2019. This updated version maintains a strong focus on automating processes, to make better use of professionals’ time, and on integrating IT departments with the business, in order to improve communication between teams and between technical and non-technical staff. Version 4 introduces new ways to tackle the challenges of modern technology, and its main goal is to become ever more agile and cooperative.

Reading current books on the subject simply won’t give you enough background to effectively implement ITIL in your company, however. You need to engage professionals dedicated specifically to the field, and guarantee adequate training and certification for both the company and those professionals. Current certification, in accordance with version 4 of ITIL, is divided into two levels: ITIL Foundation and ITIL Master – each with its own examinations and programme content. There are two options following the ITIL Foundation module: ITIL Managing Professional (which certifies an ITIL specialist) and ITIL Strategic Leader (encompassing the ITIL Strategist and ITIL Leader certificates). After completing the foundation accreditation, you can then move up to master level – the highest certification available in ITIL 4. You can review the full scheme in the table below:


ITIL is divided into five major areas – Service Strategy, Service Design, Service Transition, Service Operation and Continual Service Improvement – and each area has its own processes. Although the framework defines 26 processes in total, companies are not obliged to adopt them all: it is up to the IT professionals, and ultimately the CTO, to define the appropriate procedures to integrate into their teams. Below you can find some examples of the most commonly used processes:

Ana Lamelas