Big Data Analytics
Harness the full power of modern Big Data technology, together with visual tools, to integrate and analyse enterprise data. Neoflex offers the following data integration services:
- Batch processing (ETL) of extremely large data volumes from heterogeneous sources.
- Ingesting data from streaming sources that generate data at high intensity (Streaming).
- Computations applying machine learning algorithms (classification, clustering, forecasting based on decision trees and other numerical techniques) to large volumes of data.
- Data Lake creation.
Industry use cases:
- Agriculture. Processing data from geolocation sensors and weighing stations to detect anomalies and prevent harvest losses, with monitoring at a situation centre.
- Logistics. Processing geolocation data and the actual progress of loading and unloading operations to adjust plans in real time.
- Finance. Calculating the optimum cash volume for an ATM network using machine learning algorithms; preparing management and regulatory reports over large volumes of data.
- Telecommunications. Building a Data Lake with subscriber and service-usage data for further analysis and for pre-sales activities.
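The sensor-driven use cases rely on detecting anomalies in streams of readings. As a minimal pure-Python sketch of that idea (the rolling-window size, threshold, and sample readings are illustrative assumptions, not the production pipeline):

```python
# Rolling z-score anomaly detection over a stream of sensor readings.
# Window size and threshold below are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=5, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard
    deviations from the rolling mean of the last `window` readings."""
    recent = deque(maxlen=window)
    flagged = []
    for value in stream:
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                flagged.append(value)
        recent.append(value)
    return flagged

# Hypothetical weighing-station readings with one obvious outlier.
readings = [100, 101, 99, 100, 102, 250, 101, 100]
print(detect_anomalies(readings))  # → [250]
```

In a real deployment this logic would run inside a streaming engine such as Spark rather than a plain loop, but the statistical core is the same.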
Neoflex has its own visual development tools based on the following technologies:
Apache Hadoop is a suite of utilities, libraries and frameworks for developing and running distributed programs on clusters of hundreds or thousands of nodes.
Apache Spark is a programming framework for distributed processing of unstructured and semi-structured data.
Apache Oozie is a system for managing, coordinating and scheduling Hadoop and Spark jobs.
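As an illustration, an Oozie workflow that runs a single Spark job might look like the following sketch (the workflow name, class and jar path are placeholders, not real project artifacts):

```xml
<workflow-app xmlns="uri:oozie:workflow:0.5" name="daily-etl">
    <start to="spark-etl"/>
    <action name="spark-etl">
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <master>yarn-cluster</master>
            <name>daily-etl</name>
            <class>com.example.DailyEtl</class>
            <jar>${nameNode}/apps/etl/daily-etl.jar</jar>
        </spark>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>ETL failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
    </kill>
    <end name="end"/>
</workflow-app>
```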
Spark Jobserver is a REST service for submitting and managing Spark jobs.
These tools are built on the Model Driven Architecture (MDA) approach.