Over 3000 developers to choose from

Since 2002, Azati Software has been a trusted partner for businesses seeking top-tier talent in software development. Our vetted developers, architects, designers, and project managers are dedicated to delivering exceptional results tailored to your needs.

With over two decades of experience in the industry, our commitment to excellence and unparalleled expertise has earned the trust of companies worldwide. We understand the unique challenges of each project and work closely with our clients to ensure successful outcomes that drive growth and innovation.

Let us help you accelerate your roadmap and achieve your business goals with our skilled team.

Poland
Hozha 86/410, Warszawa, Mazowieckie 00-682
+48 22 208 48 43
United States
184 South Livingston Avenue, Suite 119, Livingston, New Jersey 07039
$25 - $49/hr
250 - 999
2002

Service Focus

Focus of Software Development
  • Java - 25%
  • PHP - 25%
  • Javascript - 25%
  • AngularJS - 25%
Focus of Artificial Intelligence
  • Machine Learning - 20%
  • Generative AI - 20%
  • Computer Vision - 20%
  • Recommendation Engine - 20%
  • AI Consulting - 20%
Focus of Web Designing (UI/UX)
  • Website - 100%
Focus of Testing Services
  • Automation Testing - 50%
  • Interface Testing - 50%


Focus of DevOps
  • Gradle - 25%
  • Git - 25%
  • DevOps Automation - 25%
  • DevOps Consulting - 25%
Focus of Big Data & BI
  • Data Analytics - 100%
Focus of Business Services
  • HR - 100%
Focus of Implementation Services
  • ERP Consulting - 100%

Industry Focus

  • Advertising & Marketing - 10%
  • Automotive - 10%
  • Business Services - 10%
  • Financial & Payments - 10%
  • Hospitality - 10%
  • Telecommunication - 10%
  • Real Estate - 10%
  • Transportation & Logistics - 10%
  • Insurance - 10%
  • Oil & Energy - 10%

Client Focus

100% Small Business

Detailed Reviews of Azati

No reviews submitted yet.

Client Portfolio of Azati

Project Industry

  • Food & Beverages - 12.5%
  • Information Technology - 12.5%
  • Oil & Energy - 6.3%
  • Healthcare & Medical - 12.5%
  • Automotive - 6.3%
  • Hospitality - 6.3%
  • Telecommunication - 6.3%
  • Enterprise - 6.3%
  • Government - 12.5%
  • Banking - 12.5%
  • Retail - 6.3%

Major Industry Focus

Food & Beverages

Project Cost

  • Not Disclosed - 100.0%

Common Project Cost

Not Disclosed

Project Timeline

  • Not Disclosed - 87.5%
  • 1 to 25 Weeks - 12.5%

Common Project Timeline

Not Disclosed

Clients: 10

  • Citizen
  • Doxa
  • Aptagen
  • MedPro Group
  • GeoLytics
  • TopHotelProjects
  • Marketboomer
  • CIB
  • Digatex
  • Clarivate

Portfolios: 16

AI Calorie Calculator and Food Recognition

Not Disclosed
25 weeks
Food & Beverages

Our data scientists successfully implemented a prototype system within an already functioning calorie-counting application that can instantly estimate the calorie content of complex dishes through image analysis. Such a solution can be useful in domains such as agriculture, catering, and sports, or even in everyday life.

OBJECTIVE
In recent years, it has become possible to use deep learning to recognize objects in images with high accuracy. We realized that Azati could apply these technologies to the problem of food calorie estimation to simplify the process and give the user the fastest and most efficient result.

SOLUTION
To solve this challenge, we developed a small script written in Python. The prototype takes an image as input and returns a set of frames in which each component is outlined with a square labeled with its calorie count, while the total for the whole dish is displayed. Check out the screenshots below to see what the results look like.
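The aggregation step of such a prototype can be sketched in a few lines. This is an illustrative sketch only: in the real system the per-component detections come from a trained computer-vision model, and the `(label, bounding_box, kcal)` tuple format used here is an assumption.

```python
# Illustrative sketch: combine per-component detections from a (hypothetical)
# food-recognition model into the per-item and total calorie summary
# displayed by the prototype.

def summarize_dish(detections):
    """detections: list of (label, bounding_box, kcal) tuples.

    Returns ([(label, kcal), ...], total_kcal).
    """
    components = [(label, kcal) for label, _box, kcal in detections]
    total = sum(kcal for _label, kcal in components)
    return components, total

# Example: a plate detected as rice, chicken, and salad
detections = [
    ("rice",    (10, 10, 120, 90),   200),
    ("chicken", (130, 20, 240, 110), 330),
    ("salad",   (20, 120, 140, 200),  45),
]
components, total = summarize_dish(detections)
print(total)  # 575
```

In the actual prototype, each bounding box would also be drawn onto the output frame next to its calorie label.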

RESULTS
We made it possible to process images of compound dishes and calculate their calorie content using machine learning and computer vision. The model recognizes each product quite accurately and distinguishes one component from another. For more complex object classification, the model requires additional data and extra training.

A Secure LLM for Enhanced Information Sharing

Not Disclosed
Not Disclosed
Information Technology

Objective

The primary purpose of the project was to develop an in-house analog of ChatGPT using an open-source language model. The service gives employees a convenient communication tool while keeping corporate data confidential: communication efficiency improves, and the company is better prepared for the demands of today's business environment.

Process

  1. Choosing a Suitable Language Model (LLM):
    The process began with a review of open-source language models, considering factors such as performance, accuracy, and suitability for corporate use. Popular models such as BERT and GPT-2, among others, were evaluated to determine which aligned best with the project objectives.
  2. Fine-tuning using LoRA (Low-Rank Adaptation):
    Once the base LLM was chosen, the next step was domain fine-tuning. LoRA adapts the model to the application setting, including its terminology, business needs, and other domain characteristics, without retraining all of its weights.
  3. Quantization:
    After fine-tuning, we applied quantization, an optimization technique that reduces the model's memory footprint. This was important because computational resources in an enterprise setting can be very limited.
  4. Enhancing the Retrieval-Augmented Generation (RAG) Approach:

    To optimize interactions, several enhancements were made to the RAG approach:

    • Addressing questions: Improving response generation for user queries.
    • Multi-querying: Expanding capabilities to handle multiple reformulated queries simultaneously.
    • Parent retrieval: Retrieving the parent documents of matched chunks more efficiently.
    • Hypothetical questions: Introducing generated questions to enrich the indexed content.
    • Keyword and topic extraction: Enhancing keyword and topic extraction procedures.
    • HyDE (Hypothetical Document Embeddings): Generating a hypothetical answer and retrieving against it to improve recall.

    These measures not only tailored the model to fit requirements but also boosted its performance in query handling and content generation.
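To make the quantization step above concrete, here is a toy sketch of symmetric int8 post-training quantization. This is an assumption-laden illustration: a production setup would rely on library implementations rather than hand-rolled code, but the core idea of mapping floats to small integers plus a scale factor is the same.

```python
# Toy symmetric int8 quantization (illustrative only): floats are mapped
# to integers in [-127, 127] plus one scale factor per tensor.

def quantize_int8(weights):
    """Return (int8 values, scale) for a list of float weights."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid a zero scale
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.48, 0.31, 0.02, -0.254]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each restored weight lies within half a quantization step of the original
assert all(abs(a - b) <= scale / 2 for a, b in zip(weights, restored))
```

The memory win comes from storing one byte per weight instead of four, at the cost of the small rounding error bounded above.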

Solution

The solution to the task involved the following key steps:

Development of an Independent Internal Service:
By choosing an LLM and applying techniques such as fine-tuning, quantization, and an enhanced RAG method, an independent internal service was successfully developed. The platform offers a smart way of sharing information, designed to meet the needs of the business world.

Ensuring the Security of Corporate Data:
To safeguard information, security protocols were established across different tiers. These measures encompassed encryption techniques, stringent access controls, and additional technologies aimed at guaranteeing defense against data breaches.

Integration with Confidential Data:
In the end, the developed service was smoothly connected to the organization’s databases, allowing easy data exchange among staff members. The main goal here was to customize the service to the organization’s data structures and specific needs.

Testing and Performance Evaluation:
Following the development and integration of the service, various tests were carried out to evaluate its performance and efficiency. These assessments included real-world usage scenarios, analysis of response times, and validation of adherence to security protocols.

Employee Training and Implementation:
To ensure smooth adoption of the service, comprehensive training sessions were conducted for employees. These covered the interface, its functionality, and guidance on usage practices.

Through these steps, an independent internal service was successfully created and deployed, fully replacing ChatGPT and ensuring the secure handling of confidential corporate data.

Oil & Gas Meters Processing with Artificial Intelligence and Computer Vision

Not Disclosed
Not Disclosed
Oil & Energy

Azati helped a Canadian customer develop an AI-powered service for automatically processing data from meters that measure produced oil & gas resources, using machine learning and computer vision technologies.

Customer

Petroleum products are the basic fuel for most types of transport, as well as raw material for chemical production. Natural gas is one of the best fuels for domestic and industrial needs; polymers are made from it, and the helium extracted alongside it is used in high-precision equipment and the space industry.

The oil & gas industry plays a leading role in the economy and is closely linked to other industries. It is a complex system that includes raw material extraction, fuel purification, and further processing, and modern specialized technologies play an important role throughout this chain.

A Canadian oil & gas customer service company turned to Azati to automate reading data from meters.

Objective

Today, any industrial complex in the oil and gas industry must be fully automated, which is why numerous controllers, meters, and block modules are produced. Automation not only reduces the influence of the human factor but also increases efficiency.

The customer turned to us to research and develop approaches for automating the reading of data from the graphs printed by metering equipment that accounts for extracted resources.

The task included processing graphic information (recognizing and reading the curves on the graph), printed data (stickers with printed text and graphics such as barcodes), and handwritten data (dates, numbers, and notes from equipment operators).

Process

We started the project by creating a successful pilot prototype for reading curves on a graph. The task proved feasible, and we proceeded to develop recognition for the other aspects and details of the input graphs.

The Canadian side was involved in project management, prioritization of activities, and coordination of the delivery schedule. The client, in turn, decided how closely each result met the project's goals and whether it was suitable for marketing analysis.

Solution

The project was done by a team of ML specialists.

The product is a set of services that receives a scanned document as input and outputs the recognized and calculated result produced by the artificial intelligence model we developed.

The services were integrated into the client’s infrastructure and launched in the cloud.

The developed services included the following functionality:

  • 90% accuracy for barcode processing
  • Above 80% accuracy for curve (line) processing
  • Accuracy for handwritten data (dates and numbers) varied greatly with the quality of the input, from 30% to over 70%. It depended entirely on the human factor: how accurately the form fields were filled in, blots and corrections, and handwriting features.

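The curve-reading idea can be shown on a toy binarized scan. This is a hypothetical sketch only: production processing must also handle skew, noise, overlapping pen strokes, and gaps, which is where the accuracy figures above were won or lost.

```python
# Toy curve tracing on a binarized scan (1 = dark pixel): for every x
# column, recover the y position of the plotted line.

def trace_curve(binary_image):
    """Return the (mean) dark-pixel row per column, or None if empty."""
    height = len(binary_image)
    width = len(binary_image[0]) if height else 0
    curve = []
    for x in range(width):
        rows = [y for y in range(height) if binary_image[y][x]]
        # A pen stroke may span several pixels; take the mean row
        curve.append(sum(rows) // len(rows) if rows else None)
    return curve

scan = [
    [0, 0, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 0],
]
print(trace_curve(scan))  # [2, 2, 1, 0]
```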
NLP Solution For Pharmaceutical Marketing

Not Disclosed
Not Disclosed
Healthcare & Medical

Customer

Health is the highest value, but unfortunately only a few people think so. Drug manufacturers, pharmacies, and doctors have to make a great effort to promote the idea of health and create a desire to be healthy. Health can be promoted effectively only with a good understanding of people and their motivation. These are the problems that pharmaceutical marketing works on.

Pharmaceutical products are difficult for non-experts in healthcare to understand, so conventional marketing is hardly applicable to them, and direct pharmaceutical marketing is more appropriate. Simple marketing suffices when modest earnings are enough; pharmaceutical companies looking for much larger profits prefer advanced marketing.

Our customer is an entrepreneur with extensive experience in the marketing business who came to us with a unique idea: to help pharmaceutical companies increase the trust of ordinary buyers who know little about a drug’s composition.

Objective

Pharmaceutical marketing, in this case, is not just about sharing information across different sources but also about the quality of that information and its compliance with what is declared. In healthcare, it is sometimes difficult for an ordinary person to judge which pharmaceutical composition is better or more appropriate; hence, people often buy the “popular” medications they hear about.

The customer approached us with a unique idea that did not yet exist on the market. The essential purpose was to facilitate the search for and comparison of a required product against doctors’ recommendations, based on questionnaires and insights from professionals.

Our task was to develop an MVP using Artificial Intelligence and Machine Learning technologies to build assessment reports for pharmaceutical companies. Azati’s team studied models and tools for ML tasks including speech-to-text, text mining, and finding similar phrases and mismatches using NLP.

We also evaluated data visualization tools to present the resulting reports in an understandable form.

Process

Development process step by step:

Stage 1.

First, it was necessary to decide how to bring all the input data into a standard format that the AI algorithm could work with. We had to solve speech-to-text recognition issues, train the model on domain-specific terms, and handle punctuation and capitalization subtasks within the NLP framework.

Stage 2.

We then solved problems directly related to NLP: analyzing the input text, finding common phrases, segmenting phrases by topic, performing sentiment analysis, and calculating scores for the phrases found.
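A heavily simplified sketch of the common-phrase step: plain bigram counting stands in for the real phrase extraction, which used proper NLP models, and the example answers are invented.

```python
# Simplified phrase mining (illustrative): count word bigrams across
# doctors' answers and keep the ones that recur.
from collections import Counter

def common_bigrams(answers, min_count=2):
    counts = Counter()
    for text in answers:
        words = text.lower().split()
        counts.update(zip(words, words[1:]))
    return {bigram: n for bigram, n in counts.items() if n >= min_count}

answers = [
    "the drug is well tolerated",
    "patients say the drug is effective",
    "the drug is safe for long use",
]
print(common_bigrams(answers))
# {('the', 'drug'): 3, ('drug', 'is'): 3}
```

Topic segmentation and sentiment scoring would then run over the recurring phrases rather than the raw transcripts.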

Stage 3.

At this stage, we grouped the obtained data into thematic reports, each highlighting a particular aspect of the respondents’ answers.

Stage 4.

At the final stage, we selected tools for visualizing the resulting reports and a simple, intuitive way to share and demonstrate the results and main conclusions to potential customers.

Solution

As a result, we have developed an ML model that can analyze data from medical questionnaires, find insights, and build reports for the end-user within the existing pharmaceutical domain.

We have built an ML model that can automate the following tasks for marketing analysis:

  • Analyze questionnaires from a group of doctors and identify similar answers;
  • Compare and correlate the aspects voiced by doctors with the offers in the marketing strategies of pharmaceutical companies;
  • Generate reports where users can see what practicing doctors are saying, what they value in the specified pharmaceutical product (effectiveness, safety, usage, etc.), and how the product's marketing strategy can be improved.

Cloud System for Document Digitization

Not Disclosed
Not Disclosed
Automotive

A custom system for engineering drawing digitization, powered by artificial intelligence, that extracts data from paper maps, schemes, and other technical documents.

Customer:

Together with our strategic partner DIGATEX, we combined our software and data science skills with their domain knowledge of engineering data management to create DI-analytics, a unique solution for digitizing vast amounts of complex documents for customers who own and operate complex assets such as oil refineries and offshore production facilities.

One of the first customers for this solution is a Southeast Asian corporation that explores for and manufactures petrochemical products. The company is ranked among the Fortune Global 500 largest corporations in the world, with business interests spanning 35 countries.

Due to specific business demands, the customer regularly has to digitize vast amounts of complex documents. The service was provided as an outsourced process comprising document processing, data extraction, and collation.

Objective:

The objective was to build a solution for digitizing a large number of complex documents as quickly as possible. Most of the documents were pipeline layouts, industrial plans, manufacturing schemes, and maps obtained from third-party vendors and partners.

Process:

After initial research, we found that no existing technology could overcome the customer's challenges. Several companies provide similar services, but their products are entirely unsuitable for documents with flexible structures and for industrial maps.

Our engineers decided to build a custom Optical Character Recognition (OCR) engine powered by artificial intelligence.

AI was a good option: much like a human reader, it searches for data patterns in the document.
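The pattern-search idea can be illustrated with a toy post-OCR extraction step. This is a hypothetical example, not the production engine: the equipment-tag format and the regex are assumptions made for illustration.

```python
# Hypothetical post-OCR extraction step: once raw text is recognized,
# structured fields such as equipment tags can be pulled out with patterns.
import re

# Assumed tag format: letter prefix, dash, digits (e.g. "P-101")
TAG_PATTERN = re.compile(r"\b[A-Z]{1,3}-\d{2,4}\b")

def extract_tags(ocr_text):
    """Return equipment-tag-like tokens found in recognized text."""
    return TAG_PATTERN.findall(ocr_text)

text = "Pump P-101 feeds vessel V-2043 via line L-77; see note."
print(extract_tags(text))  # ['P-101', 'V-2043', 'L-77']
```

In the real system, neural networks handled the recognition itself; rule-based extraction like this would only be one small downstream step.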

The team's solid scientific background helped our engineers build an MVP in less than two weeks. We immediately requested the first documents from the customer and obtained results that impressed them.

We processed about 10,000 documents in less than 8 hours with an average accuracy of 84%.

Since then, we have been tuning the algorithms and improving the system's performance.

The accuracy of extracted data is now close to 97%.

Solution:

The final application is an entirely modular system hosted in a secure enterprise cloud. All ongoing tuning and maintenance are performed remotely, which helps the customer avoid on-site personnel training and cut maintenance costs.

We are proud to say that a small group of neural networks powers every module, and together the modules form a single artificial intelligence that takes a document as input and provides accurately extracted data as output.

Because the system is hosted in the cloud, it can be managed from anywhere. If the customer needs to process a large number of documents quickly, we can provision the required resources in minutes and handle any volume.

Voice-Command-Based Restaurant Operations Management

Not Disclosed
Not Disclosed
Food & Beverages

Idea

The project idea revolves around enhancing the dining experience through innovative technology. Each table is equipped with a sophisticated sound system allowing customers to naturally issue commands like: “call the waiter”, “give the bill”, “bring the bread”, “provide the menu”, etc.

These commands seamlessly integrate into the control system, where they undergo interpretation, context analysis, and the extraction of any necessary supplementary details. Automatically, tasks are generated, assigned to the right staff member, and precise timers are set for performance tracking.

Once assigned, tasks are sent to the staff member’s device with voice commands for specific actions, creating a personalized to-do list. Reminders are sent when the timer runs out.

Conversely, the waitstaff also engage with the system through vocal interactions. For instance, they might say, “I’ve taken an order for table 7: one black coffee and one croissant,” or “Please arrange a taxi for table 3”. And the task will be created. Additionally, they provide timely status updates on completed tasks, such as “The bill has been settled for the guest at table 5,” triggering automatic task completion in the queue.

All information is duplicated on internal resources, dashboards, and screens. The system makes it possible to monitor current processes, quickly find bottlenecks, and identify and fix problems.
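The mapping from a recognized utterance to a task can be sketched with rules. This is an assumption-heavy illustration: the real system used trained intent models, while simple keyword matching stands in here, and the intent names are invented.

```python
# Rule-based sketch of utterance-to-task mapping (illustrative only).
import re

GUEST_INTENTS = {
    "call the waiter": "CALL_WAITER",
    "give the bill": "BRING_BILL",
    "bring the bread": "BRING_BREAD",
    "provide the menu": "BRING_MENU",
}

def guest_utterance_to_task(text, table):
    """Map a transcribed guest utterance to a task, or None if unknown."""
    lowered = text.lower()
    for phrase, intent in GUEST_INTENTS.items():
        if phrase in lowered:
            return {"intent": intent, "table": table}
    return None

def staff_utterance_to_task(text):
    """Parse a staff utterance such as 'order for table 7: one coffee'."""
    match = re.search(r"table (\d+)", text.lower())
    if match:
        return {"intent": "STAFF_UPDATE", "table": int(match.group(1))}
    return None

print(guest_utterance_to_task("Please call the waiter", table=3))
# {'intent': 'CALL_WAITER', 'table': 3}
```

Each resulting task dict would then be assigned to a staff member's queue with a performance timer attached.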

Objective

The project aimed to develop and implement a system utilizing machine learning to recognize and analyze speech from both restaurant employees and customers. Key tasks involved converting speech into text, extracting commands and their attributes, and discerning when commands were completed.

Furthermore, the project successfully implemented the application architecture, established a task management system, and enabled seamless communication between the server and wireless devices for prompt command processing.

Part of the analytics and optimization work involves analyzing the efficiency of all processes and identifying weaknesses and bottlenecks in the system, which makes it possible to improve its productivity and operational efficiency.

Process

The journey of crafting a speech processing system involved numerous stages where our accumulated expertise proved invaluable. Commencing with speech recording and digitization, we progressed to the audio-to-text transformation, a pivotal phase in translating audio data into comprehensible information for the system. Subsequently, we delved into text content analysis, deciphering not just the words but also the emotions and intonations, enabling a profound grasp of the statements’ meanings and users’ needs. This meticulous, multi-step process served as the bedrock for the creation of a speech processing system that is both precise and highly responsive.

Solution

To implement the ML part of the project, we deployed and configured machine learning models. This process involved training the model to identify several dozen core commands, which required collecting and annotating a large amount of data. We then conducted preliminary testing and verification of the model, creating a Proof of Concept (POC) to ensure its functionality and effectiveness. Finally, we successfully demonstrated the developed ML system, showing its ability to accurately recognize and process a variety of commands, which confirmed its readiness for integration into the overall project architecture.

Results

We verified and showcased the functionality of the essential components within the machine learning domain: speech analysis, command identification, interpretation, attribute search, and task formulation. A proof of concept (POC) was meticulously prepared and presented to the client, complete with detailed calculations and a comprehensive commercial proposal.

Data-Driven App & Portal for Hospitality Industry

Not Disclosed
Not Disclosed
Hospitality

Azati designed and developed a group of five interconnected applications providing a real-time service that redefines the traditional flow of ordering and payment in venues, while benefiting both venue owners and visitors.

CUSTOMER

A Canadian startup company focused on business automation and digital transformation, the customer decided to bring venue processes in line with today's highly technological reality.

The customer wanted to empower bar visitors: to let them buy entrance tickets and place and pay for orders within seconds.

Another opportunity lay in empowering venue owners with an internal business application that processes orders and payments and tracks sales statistics, reducing costs and attracting more clientele.

OBJECTIVE

The customer set out to build five interconnected applications plugged into one robust ecosystem.

These applications include:

01. Two mobile apps (iOS and Android) for guests

02. The application for waiters and bartenders

03. The web portal for a venue owner & administration

04. CRM for the customer to manage business accounts

PROCESS

We refactored the code, covered it with tests, and created proper documentation. We also optimized the application's response time and database structure, and eliminated problems with 3rd-party integrations.

We made a successful migration to another cloud hosting. Alongside this, we set up the on-demand group of clusters, which would start working automatically as the number of requests (load) increases. Thus, the whole system now works safely and reliably — even on days of an intense load.

To ensure transparency, speed, and high quality of work, we established development processes, client request tracking, and communication channels that had not previously existed on this project.

It was important for our customer to test each hypothesis before deciding whether to introduce it in the app, so we used focus-group testing techniques. Our team kept in mind that new features could be added at any time, and existing ones removed, to achieve maximum flexibility.

The customer wanted the mobile app for venue guests to be published on the App Store before the so-called “patio” season began. We adopted continuous integration and delivery, so the team released a constant flow of software updates into production.

RESULTS

  1. Connected five separate applications into one robust ecosystem
  2. Migrated to a robust, scalable, and cost-effective operating environment
  3. Launched the consumer mobile application on the App Store
  4. Eliminated known bugs, improving the quality and reliability of the solution
  5. Developed new highly requested features while meeting strict deadlines
  6. Established transparent development processes and communications
  7. Delivered customer value with continuous integration and delivery

BI and DWH Services for Telecommunication Provider

Not Disclosed
Not Disclosed
Telecommunication

Customer

The Customer is one of the leading US entertainment companies; it owns and operates several cable channels, digital platforms, and a streaming service, and has a vast network of well-known sub-brands. For over thirty years, our Customer has been providing high-quality content and entertainment services to viewers.

Primary Responsibilities of Azati:

  • Build and deliver Reporting and Querying Software
  • Online Analytical Processing
  • Creating Interactive Dashboards
  • Data Mining and Cleansing
  • Business Activity Monitoring
  • Data Warehousing
  • ETL Process Enhancement

We helped to deliver

Audience Planning Platform

Azati helped the Customer to create a custom audience planning platform with a feature-rich web interface. The solution uses the massive datasets provided by Nielsen, Prism Intelligence, Claritas, and 3rd parties to plan audiences both for TV ads and digital marketing campaigns.

While traditional television ads mostly rely on historical demographic data such as sex and age to maximize impressions, the solution lets users plan audience segments using roughly 100 options, including demographics, interests, location, income, and behavior. The solution's primary purpose is to simplify audience planning and make the experience similar to that provided by Facebook and Google.

The platform for Audience Buying and Advanced Targeting

The team helped create a solution that allows buying target-audience segments quickly. The platform is powered by machine learning and sophisticated data-management techniques that pull audience data from many sources and let users plan and manage ad buys through a unified interface.

The main idea of this solution is a workflow in which a publisher selects the audience, provides ad creatives, plans a schedule, creates a marketing campaign, and submits it to a manager. The manager can approve or reject the campaign, pointing out aspects that require additional work.

Reporting Software for VOD (Video-On-Demand)

Azati assisted in developing a business intelligence solution for analyzing the primary metrics of subscription video. The solution analyzes many parameters, such as impressions, play rate, click-through rate, engagement, churn, acquisition, and retention.

The main idea of this solution is to find the best time and placements for new content releases to maximize audience coverage.

Value for the Customer

The solutions help the Customer minimize human involvement in marketing analytics. The main idea behind all the work is to understand the best time to release new content and to help advertisers reach the right audience with the right message. This work is traditionally carried out manually by data analysts, who provide the necessary information to the marketing department.

As the amount of data grows every year, handling such colossal quantities of information becomes more challenging for the analysis team, so the workflow requires automation. This is the main reason the Customer allocates budget to a research and development program in business intelligence and data warehousing.

Azati is involved in a considerable part of the R&D program, as the team helps deliver several products that are admired by investors, employees, and advertisers alike.

There are multiple benefits of the R&D program for the Customer:

#1: Cost and Time Saving

It is costly to extract and process big data manually, so it is easier to create small scripts that automate repetitive actions instead of doing everything by hand in Excel. These scripts are later combined into solutions with pretty impressive user interfaces.

#2: Ease of Access and Maintenance

Before Azati joined the project, it was necessary to query multiple data sources to get complete information. Now the data resides in centralized storage and can be easily accessed by employees and trusted third parties at any time.

#3: Stock Price Management

The R&D program increases the company's stock market price. It delivers applications that advance the industry by bringing new technologies and new ways to handle big data, making traditional TV marketing more performance-driven.

#4: Public Relations Boost

The use and development of cutting-edge technologies help the Customer attract talent to the development team.

Transforming Personnel Management: Automating Candidate Selection for Project Success

Not Disclosed
Not Disclosed
Enterprise

Objective

In the modern world, successful personnel management is one of the key factors in any company's success, and employee recruitment is a central aspect of it. This facet can be significantly streamlined through process automation.

The main objective of this approach is the automated selection of the employees best suited for current or potential projects. It improves the recruitment process, reducing the time spent on candidate search and selection while increasing the likelihood of project success, since tasks are staffed with more suitable resources and competencies. In addition, automation accelerates resume analysis and screening, ensuring prompt assessment of potential candidates and faster formation of the project team.

Process

Our approach to processing resumes and project descriptions underwent significant transformations as we refined our methodology. Initially, we followed a classical method, extracting and storing resumes and project descriptions in a relational database. However, as the project moved forward, it became apparent that this approach was not well suited to handling unstructured data like CV text. To address these limitations, we implemented LLMs to extract relevant information, significantly increasing our ability to process resumes and project descriptions effectively.

We developed a process for vectorizing and storing the collected information in a vector database. This enabled us to rapidly and flexibly execute candidate filtering based on specific job opening requirements.

However, despite this improvement, the process of working with filtered candidates still required considerable manual effort from the recruiting department. To further optimize our approach, we added a Virtual Assessment layer, which leverages LLMs to analyze how well a specific candidate is suited for a specific job, providing a short explanation to support the evaluation.

Solution

Our solution is a custom-built microservice that leverages Large Language Models (LLMs) to automate candidate selection for projects. This innovative approach enables efficient information extraction, semantic search, and accurate decision-making.

The microservice utilizes LLMs to analyze large datasets of resumes and project descriptions, extracting relevant information such as skills, experience, and education. By storing this data in vector format, we enable rapid and flexible candidate filtering based on specific job openings.

A key advantage of our solution is its customizable relevance filtering. Users can adjust the importance of various criteria — such as knowledge, experience, education, and skills — for each project, ensuring that the most relevant candidates are selected. This flexibility enables the microservice to provide a tailored set of candidates for each project, facilitating the rapid creation of high-performing teams.

Technical Highlights:

  • Our microservice employs state-of-the-art LLMs for information extraction and candidate assessment.
  • Data is stored in vector format, enabling rapid and accurate candidate filtering through semantic search via vector similarity scoring.
  • The system features customizable relevance filtering, allowing users to adjust criteria weights (e.g., knowledge, experience, education, skills) for each project.
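The weighted similarity scoring behind this filtering can be sketched in plain Python. The vectors, candidate names, and criteria weights below are toy values; the production service derives embeddings from LLMs:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_candidates(job_vecs, candidates, weights):
    """Score each candidate by a weighted sum of per-criterion
    cosine similarities, then rank best-first."""
    scored = []
    for name, vecs in candidates.items():
        score = sum(weights[c] * cosine(job_vecs[c], vecs[c])
                    for c in weights)
        scored.append((name, round(score, 3)))
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Toy per-criterion embeddings for one job opening and two candidates.
job = {"skills": [1.0, 0.0, 0.5], "experience": [0.2, 0.9, 0.1]}
pool = {
    "alice": {"skills": [0.9, 0.1, 0.4], "experience": [0.3, 0.8, 0.2]},
    "bob":   {"skills": [0.1, 0.9, 0.0], "experience": [0.9, 0.1, 0.7]},
}
# Users can tune these weights per project, as described above.
weights = {"skills": 0.7, "experience": 0.3}
print(rank_candidates(job, pool, weights))
```

Adjusting the weights changes the ranking per project, which is the "customizable relevance filtering" idea in miniature.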
Semantic Search Engine for Bioinformatics Company

Not Disclosed
Not Disclosed
Healthcare & Medical

Customer:

A US company focused on the development of in vitro diagnostic (IVD) and biopharmaceutical products. It provides products and services that support research and development activities and accelerate time to market.

The customer offers clinical trial management services, biological materials, central laboratory testing, and other solutions that enable product development and research in infectious diseases, oncology, rheumatology, endocrinology, cardiology, and genetic disorders.

A lot of companies suffer from the lack of accurate and fast search engines that can handle substantial scientific datasets. Scientific datasets are known for their structural complexity and a vast number of interconnected terms and abbreviations that make data processing quite tricky.

The customer was looking for a partner who could overcome this challenge.

Objective:

The customer wanted us to build an intelligent search engine to help with internal inventory search. The inventory included a considerable number of blood samples, each described by several tags grouped into subcategories, which in turn were grouped into larger categories, and so on.

Customer’s employees were forced to select many tags by hand to get the information they wanted, and a single search took several minutes. More disappointing still, if an employee made a single mistake or provided an inaccurate query, he or she got an empty result page.

The entire data lookup process was a huge disappointment and headache for the personnel and the customer.

There were several challenges to overcome to improve the customer’s workflow.

Process:

From the very beginning, the customer provided a list of keywords describing blood samples. Our team soon discovered that this list was incomplete and required additional research; on its own, it was not enough to complete the project. Still, issues like this could not stop our team from completing the project on time.

We therefore decided to split the final solution into two pluggable modules: one for intelligent matching, which determined the confidence level when tagging a blood sample, and another to extract all possible tags from search queries. In effect, the second module turned unstructured user input into structured data.

The first challenge our engineers overcame was the lack of sample data. We trained a custom model on a hundred thousand life-science documents related to blood samples, gathered from open data sources. Our data scientists used Word2vec to analyze the connections between the most common words from the thesaurus, find synonyms, and determine how these words relate to each other.

As a result, the model could automatically analyze descriptions and tag blood samples with a high confidence level, close to 98%.
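Word2vec finds related terms by comparing the contexts words appear in. The distributional idea can be illustrated with a stdlib-only sketch that uses count-based context vectors instead of Word2vec's learned embeddings; the corpus below is a toy example, not the project's data:

```python
import math

def cooccurrence_vectors(sentences, window=2):
    """Build simple count-based context vectors -- a stand-in for the
    dense embeddings Word2vec learns (illustrative only)."""
    vocab = sorted({w for s in sentences for w in s})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = {w: [0.0] * len(vocab) for w in vocab}
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    vecs[w][index[s[j]]] += 1.0
    return vecs

def most_similar(word, vecs):
    """Return the vocabulary word whose context vector is closest."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0
    return max((w for w in vecs if w != word),
               key=lambda w: cos(vecs[word], vecs[w]))

corpus = [
    ["serum", "sample", "tested", "for", "antibodies"],
    ["plasma", "sample", "tested", "for", "antibodies"],
]
vecs = cooccurrence_vectors(corpus)
# "serum" and "plasma" appear in identical contexts, so they come out
# as nearest neighbours -- the same effect used to find synonyms.
print(most_similar("serum", vecs))
```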

The module responsible for entity detection in search queries was partially ready: we had already built a similar module while developing a platform for custom chatbot development. All that was left was to retrain the model according to the list of entities: sample types, geography, diseases, genders, etc.

To achieve a high confidence level, we analyzed a massive number of user search queries collected from open data sources. In the end, we compiled a collection of patterns used to form search queries.
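The entity-detection module's job, turning a free-text query into structured tags, can be illustrated with a toy lexicon; the terms and categories below are hypothetical, not the project's real thesaurus:

```python
# Toy entity lexicon -- illustrative only.
LEXICON = {
    "serum":   ("sample_type", "Serum"),
    "plasma":  ("sample_type", "Plasma"),
    "malaria": ("disease", "Malaria"),
    "female":  ("gender", "Female"),
    "kenya":   ("geography", "Kenya"),
}

def extract_entities(query):
    """Turn a free-text query into structured tags grouped by entity
    type, as the second module did for blood-sample searches."""
    tags = {}
    for token in query.lower().split():
        if token in LEXICON:
            kind, value = LEXICON[token]
            tags.setdefault(kind, []).append(value)
    return tags

print(extract_entities("female serum samples from Kenya with malaria"))
```

A real implementation matches multi-word terms, abbreviations, and misspellings, but the output shape is the same: structured filters instead of hand-picked tags.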

Solution:

The final solution consists of three separate, interconnected modules hosted in the cloud. This approach lets us maintain the system remotely and avoids on-site personnel training. The cloud architecture also makes the application more flexible, cutting down development and maintenance costs.

Road Pothole Detection With Machine Learning And Computer Vision

Not Disclosed
Not Disclosed
Information Technology

As part of Azati Labs, our data scientists successfully built a prototype system that detects road defects by analyzing images and videos. This prototype can help municipal governments simplify road-defect detection and calculate road repair costs automatically. The information it extracts can also be used by automotive manufacturers to help smart cars avoid potholes and decrease overall repair costs.

PROJECT IDEA:

Road defects are one of the most common reasons for suspension repairs and tire replacements. According to general statistics, even the most careful drivers face suspension repair every three years or so. Bad roads are a well-known issue in all East-European countries.

The goal of this project was to train a computer vision model to detect road defects, especially potholes. There are a vast number of pre-made machine-learning models for object detection and classification trained by Google and Facebook. Unfortunately, none of them covered pothole detection, so we decided to train a custom model from scratch.

PROTOTYPE DESCRIPTION:

While solving these challenges, we developed a small script written in Python. The prototype takes an image or video clip as input and returns a set of frames where potential potholes and other road defects are outlined with squares. If the input is a video, the script splits it into frames, examines each frame separately, and then joins the processed frames back into a single video. (This is how those fancy computer-vision demo clips are made.) Check out the screenshots below to see what the results look like.

RESULTS:

The development process was quite challenging for our data scientists. We made it possible to find potholes and other road defects using machine learning and computer vision, and delivered a proof-of-concept prototype. The model finds road defects quite accurately, but there are some issues with classification: it is hard to identify a pothole when it looks similar to a road hatch. For complex object classification, the model requires additional data and extra training. The more data we provide, the more accurately the model classifies road defects. But since the data requires manual labeling, preparation takes a lot of time and makes data processing and cleansing quite expensive.

Reporting Platform for the Local Municipal Government of Canada

Not Disclosed
Not Disclosed
Government

Customer:

From the very beginning, we did not work with the customer directly. All communication went through our partner, a company focused on business automation and digital transformation. Our partner is a software development agency in Toronto, so it was easy for them to communicate with the customer on-site. The customer wanted us to solve a day-to-day issue.

Every city relies on complex sewage treatment. These systems clean blackwater to prevent ecosystem pollution. Maintaining the whole process is a complicated but essential task: if something goes wrong, it ends in disaster both for the government and for citizens.

Since industrial water treatment involves several steps, a considerable amount of different equipment is used in the process, and maintaining it is challenging: once per day, a group of engineers inspects every device to ensure that everything works properly.

And, well, the inspection process requires reporting. The customer wanted us to automate the existing reporting process and bring it in line with today's technological reality.

An engineer writes a report for every water pump he checks. As there are hundreds of pumps involved in the whole treatment process, engineers waste almost half of their time on excessive paperwork. It is easy to calculate how much a single report costs.

Objective:

After initial analysis, we figured out the best way to solve the problem: develop a custom questionnaire application. Engineers are already familiar with the questions, which were extracted from their daily reports, so no additional personnel training is required.

In This Way, We Started Developing Several Applications:

01. Data-processing application

Let’s call it “Backend”. The main idea of this app was to collect information from mobile clients and store it in the database (CRUD functionality). Also, this app generates reports later used by the customer to understand the actual state of the equipment.

02. iOS and Android clients

Two client-side applications, which engineers use to complete polls. As these applications have the same functionality, our engineers decided to build them with a cross-platform framework, React Native. This approach helped us cut down development and overall costs.

Yeah, this plan looks straightforward to implement, but there were several tech-related challenges we faced. Let’s have a closer look.

Results:

We built custom reporting software that collects data from many mobile clients. Some clients do not send information immediately but synchronize with the customer-hosted backend from time to time. Once per day, the application downloads the latest poll templates and enriches existing answers with default values when crucial fields are missing.
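The enrichment step can be sketched as a simple merge of submitted answers over template defaults; the field names below are hypothetical:

```python
def enrich(answers, template_defaults):
    """Fill fields missing from a submitted answer set with defaults
    from the latest poll template. Submitted values always win."""
    return {**template_defaults, **answers}

# Hypothetical poll template and a partially completed submission.
latest_template = {"pressure_ok": None, "leaks": "none", "inspector": "unknown"}
submitted = {"pressure_ok": True, "leaks": "minor"}
print(enrich(submitted, latest_template))
```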

Data Bus Development For Governmental Corporation

Not Disclosed
Not Disclosed
Government

Customer

One of our partners hired Azati to help a governmental corporation from the energy sector to optimize document management.

The end customer wanted Azati to help integrate a third-party application. With this app, users can convert hundreds of frequently used, standardized documents into several interactive templates, reducing the time needed to create, process, transfer, and deliver basic documents with similar structures.

Furthermore, the application enables users to eliminate errors in document construction, reduce the time of closing a deal and increase the overall efficiency of the company’s document workflow.

Objective

The main project idea was to organize the data exchange between two components: SAP ERP and the document generation system. The primary task of Azati was to develop the bus for data transfer between these components and implement the correct business logic for efficient data processing workflow.

In other words, our developers had to create a technical solution that converts an incoming message into the format required by the next step of data processing, taking into account different transmission methods (incoming data is processed via the SOAP protocol, outgoing via HTTP). Most often, data was converted from XML to JSON, extended with the meta-information required for the HTTP request.
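The conversion step can be illustrated with a short stdlib sketch. The actual solution used the Apache Camel framework in a Java service; the element names and meta keys below are hypothetical:

```python
import json
import xml.etree.ElementTree as ET

def xml_to_json_payload(xml_text, meta):
    """Convert an incoming XML message into a JSON body and merge in
    the extra meta-information needed for the outgoing HTTP request."""
    root = ET.fromstring(xml_text)
    # Flatten one level of child elements into key/value pairs.
    body = {child.tag: child.text for child in root}
    return json.dumps({**meta, "document": body})

incoming = "<doc><id>42</id><type>contract</type></doc>"
payload = xml_to_json_payload(incoming, {"target": "doc-generator", "version": 1})
print(payload)
```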

Process

The development process consisted of three main phases:

Stage 1: Planning

During the first stage, our team studied the necessary documentation and organized the workflow. After that, it became possible to move directly to the development phase.

Stage 2: The creation of web-service

First of all, to create a full-fledged web service, our team explored the capabilities of the Apache Camel framework. We then successfully built SOAP web services with Apache CXF.

Stage 3: Data processing algorithm development

During this phase, the team developed processing algorithms that convert data to the required format. After successful processing, the data is sent to the next step of the workflow – customer’s internal products and systems.

Solution

Azati developed a standalone application powered by the Apache Camel framework that acts as middleware and a data bus between two enterprise systems. The solution receives data from the SAP ERP system, converts it from XML to JSON, adds extra meta-information, and sends the result as an HTTP request to the document generation API.

This is how data is transferred and processed between main components:

  1. SAP ERP system sends a request (message) to the data bus
  2. Data bus forwards the received message to the document generation system
  3. The request is processed in the Document generation system
  4. According to the processing result, the system sends a message with the processing result back to the data bus
  5. Data bus forwards the received message to SAP ERP
Recommendation system for banking industry

Not Disclosed
Not Disclosed
Banking

Customer

Initially, recommendation systems were used primarily to attract external clients and increase company profits. But now they are also actively used for internal clients, i.e. employees: for example, recommending training courses for staff development, or even analyzing performance.

A banking employee faces a huge number of different indicators: sales growth, current ratio, transfers, and much more, depending on the particular department's specifics. Thus, it is sometimes difficult for a manager to track down weaknesses and help staff improve their results.

The customer turned to Azati with the idea of automating the analysis of incoming data from indicators and metrics and, based on this, forming proper recommendations that will help employees upgrade their workflow.

Objective

Recommendation systems are a great example of a successful IT tool for assessing personal achievement based on the available information.

Azati’s main goal on the project was to develop a server API to calculate metrics and build recommendations for banking staff, setting it up and creating a DAG Airflow.

Briefly about Airflow: it is a library for developing, scheduling, and monitoring workflows. Data pipelines are described using DAGs (Directed Acyclic Graphs): semantic groupings of tasks that must be completed in a strictly defined sequence according to a specified schedule.
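The core DAG idea, tasks executed in an order that respects their dependencies, can be illustrated with a minimal stdlib sketch. This is not Airflow's API, and the task names are hypothetical:

```python
def run_dag(tasks, deps):
    """Execute task callables in dependency order -- a tiny stand-in
    for what Airflow's scheduler does for a DAG."""
    done, order = set(), []
    def visit(t, seen=()):
        if t in done:
            return
        if t in seen:
            raise ValueError("cycle detected -- not a DAG")
        for d in deps.get(t, []):       # run prerequisites first
            visit(d, seen + (t,))
        done.add(t)
        order.append(t)
        tasks[t]()                      # run the task itself
    for t in tasks:
        visit(t)
    return order

log = []
tasks = {
    "extract_metrics": lambda: log.append("extract"),
    "score_employees": lambda: log.append("score"),
    "build_recs":      lambda: log.append("recommend"),
}
deps = {"score_employees": ["extract_metrics"],
        "build_recs": ["score_employees"]}
print(run_dag(tasks, deps))
```

In Airflow itself, the same dependencies would be declared with operators and the `>>` chaining syntax, and the scheduler would run the tasks on the configured schedule.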

Process

To build a cutting-edge recommendation system, it was necessary to go through 5 main stages.

1. Understand the Business

An extremely obvious yet important first step is defining the primary goals and key parameters. This step includes discussions and input from both the data team and business groups (product managers, operations teams, even affiliate or advertising teams, depending on the product).

Here, we answer paramount questions:

  • What is the end goal of the product?
  • What metrics should be used to build recommendations?
  • How to visualize recommendations to users?
  • Do all metrics have the same priority?

2. Data collection

The best recommendation systems use terabytes of data. So when it comes to data collection — the more the better.

Here it is important to understand what kind of incoming data we have and how relevant recommendations can be calculated based on it.

3. Explore, Clean, and Augment the Data

Consider looking only at features that are likely to represent an employee's current results: remove older data that may no longer be relevant, or add a weight factor that gives recent actions more importance than older ones.
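Recency weighting can be sketched with a simple exponential decay; the half-life value is an assumed parameter, not one from the project:

```python
def recency_weight(age_days, half_life=90):
    """Exponential decay: an action half_life days old counts half
    as much as one from today."""
    return 0.5 ** (age_days / half_life)

def weighted_score(actions):
    """actions: list of (value, age_in_days) pairs."""
    return sum(v * recency_weight(age) for v, age in actions)

# An action from today counts fully; one from 90 days ago counts half.
print(weighted_score([(10, 0), (10, 90)]))
```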

4. Rank and recommend

Next, the metrics are read, scores are calculated, and recommendations are made according to certain indicators.

5. Visualize the Data

Here we need to present the end result to the user clearly, in a form that easily and accurately reflects the main problem areas and suggests ways to improve.

Solution

Our team developed a server-side API from scratch for calculating metrics and making recommendations for bank employees.

To create such a system, the Azati team used Airflow.

We can describe Airflow as a platform for executing and monitoring workflows, where a workflow is any sequence of steps taken to achieve a specific goal. Airflow uses Directed Acyclic Graphs (DAGs), which comprise the tasks to be performed along with their dependencies, making it well suited to performance estimation.

This robust and scalable workflow planning platform has four main components:

  • Scheduler: The scheduler keeps track of all DAGs and their associated tasks;
  • Web Server: The web server is Airflow's user interface. It shows task statuses and allows the user to configure DAGs;
  • Database: The state of the DAGs and their associated tasks is stored in the database so that the scheduler retains the metadata;
  • Executor: The executor defines how the work will be done. There are different types of executors for different use cases.
Revolutionizing Banking: Automated Promotions Management

Not Disclosed
Not Disclosed
Banking

Azati’s team helped the customer improve their existing system and automate routine manual tasks to optimize the efficiency of the application.

Customer

Financial innovation has been a hallmark of the financial sector for decades, taking the form of products (such as new types of securities), new technologies (such as ATMs), and new institutions (such as venture capital funds). The current wave of financial innovation is driven by the development of a number of technologies, including smartphones, the Internet, data exchange technologies between information systems (API) and distributed ledger technology (DLT), artificial intelligence, and big data.

These new technologies are influencing how banks produce and deliver financial services to their customers, and are also driving the involvement of fintechs and big tech companies in the production and delivery of these services. This impacts traditional financial institutions and may also create new sources of systemic risk, posing new challenges for regulators.

Our customer operates in the financial sector, specifically in banking.

Objective

We aimed to enhance the promotions management module by seamlessly integrating it with legacy systems for optimal compatibility and efficient data exchange. Furthermore, our focus was on automating manual processes, enhancing overall efficiency, and ensuring precise control over marketing campaign management.

Process

  1. Introduction:
    The project began after the successful completion of interviews. Management and development were organized to high standards.
  2. Formulating the Business Problem:
    The problem was initially formulated by business representatives, and then discussed by the analyst and the architect.
  3. Description in Confluence:
    The problem statement was outlined through textual descriptions and tables within Confluence, creating a well-structured and easily accessible source of information.
  4. Assessment and Planning:
    The task was evaluated during the refinement stage, then passed on to the development phase, where comprehensive planning ensued.
  5. Implementation and Quality Control:
    The developer implemented the logic, after which it went through quality control: test coverage, a SonarQube quality gate, and reviews by senior developers.
  6. Testing and Refinement of Bugs:
    The task was transferred to testing, where identified bugs formed the foundation for generating distinct tasks. Within the scope of these tasks, improvements to functionality were implemented.

Solution

In the project’s final version, we implemented a standalone microservice. This microservice was designed to handle specific tasks independently, operating with its logic and functionality.

Embedded within the overall system architecture, this approach ensured flexibility, scalability, and efficiency in managing various business tasks.

Results

The project is still in progress, and while the work is not yet complete, the overarching goal is clear: to reduce the time for payments to clients by up to 10 times. Ongoing efforts focus on refining and enhancing various aspects to achieve this significant improvement.

UI/UX Design for a Mobile Auto Parts Market

Not Disclosed
4 weeks
Retail

The Azati team helped the customer create a user-friendly application design that replaces the offline market for buying and selling auto parts with a faster, more convenient experience.

OBJECTIVE
Spare parts are usually searched for by specific criteria, and it is not possible to accurately combine the search structure of such products with the classic catalog of a multi-category marketplace on a single site. This remains one of the key technical challenges.

The customer wanted to create an app instead of an offline market in Dubai and save people from grueling trips to physical stores.

Azati’s task was to implement business logic, form an application map and create a convenient, modern and intuitive interface for this online custom marketplace.

SOLUTION
The business logic of the application and the UX part were developed with minimal UI.

The design was developed for two types of users:

1 – BUYER: places an ad for the purchase of the spare part he needs (to do this, he sets up a filter, on the basis of which the text of the purchase offer is generated automatically)

2 – SELLER: adjusts the filter according to what he sells and receives offers from buyers matching his settings

Then the seller responds to the ad from the buyer: sets his own price, adds photos of the product and sends an offer to the buyer.

The buyer sees the response and rejects or accepts it. If he accepts, it means that he is placing an order.

The seller sees that the buyer placed the order, confirms it and sends the product by mail or courier (depending on the option the buyer has chosen).

RESULTS
Ultimately, our solution was a complete design for a mobile application instead of an offline market in Dubai for buying and selling auto parts on the fly.

The Azati team has designed the full application structure and logic:

  • Homepage;
  • Sign-up/Sign-in pages for buyers and sellers;
  • Filters settings for buyers to receive offers;
  • Filter settings for sellers to send offers;
  • Active orders and orders history pages;
  • Checkout page.