Financial institutions are regarded as among the most rigorous organizations in terms of their well-defined processes and advanced statistical tools for analyzing financial capacity, from individuals to multimillion-dollar companies. But all these tools require manual intervention, a human element, to reach the final outcome. With recent advancements in IT and the rapid shift from a paper-intensive to a paperless world, huge amounts of data are being generated about each customer. This data can be external, such as demographics, buying, selling and transactions, or it can be internal, such as social networks. Most financial institutions use machine learning and data science for risk assessment, but very few apply these tools to recovery or collection. Automated processing combined with data science and machine learning can help financial institutions identify defaulters or NPAs before the situation arises, by processing and analyzing all the data related to a customer.
(Source - https://analyticsindiamag.com/how-data-science-is-helping-address-the-npa-problem/ )
pbdR is a complete suite of software packages that eases the use of R for fast computing. Oak Ridge National Laboratory scientists recently released its first version, pbdR 1.0. The primary objective of pbdR is to analyze large data sets on high-performance computing systems such as ORNL's Titan and Summit. Among its other features, pbdR offers easy installation and use, computing power and multi-platform processing capability. pbdR harnesses the power of a distributed data framework by breaking large data sets down into small chunks. These chunks are then analyzed by multiple processors using MPI, the standard for message passing in parallel computing. In parallel computing, a network of independent processor nodes communicates to analyze the data, which in turn provides high-speed processing for large volumes of data. pbdR is modular software, and each module is independent of the others, i.e. it is up to the user which modules are needed to analyze their data.
(Source- https://www.hpcwire.com/off-the-wire/ornl-data-scientists-release-pbdr-1-0/ )
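pbdR itself is an R toolkit, but the chunk-and-combine pattern it applies can be illustrated in plain Python, here using the standard multiprocessing module in place of MPI. The function names, chunk sizes and the mean computation below are purely illustrative assumptions, a sketch of the idea rather than pbdR's actual API.

```python
from multiprocessing import Pool

def chunk(data, n_chunks):
    """Split a list into roughly equal pieces, one per worker."""
    size = (len(data) + n_chunks - 1) // n_chunks
    return [data[i:i + size] for i in range(0, len(data), size)]

def partial_stats(piece):
    """Per-chunk work: return (sum, count) so pieces can be combined later."""
    return sum(piece), len(piece)

def parallel_mean(data, n_workers=4):
    """Farm the chunks out to worker processes, then merge the partial results."""
    with Pool(n_workers) as pool:
        partials = pool.map(partial_stats, chunk(data, n_workers))
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count

if __name__ == "__main__":
    data = list(range(1, 1_000_001))
    print(parallel_mean(data))  # prints 500000.5
```

The key design point, as in pbdR, is that each worker returns a small combinable summary (sum and count) rather than its raw chunk, so the amount of data moving between processes stays tiny even when the input is large.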
We have seen how big data has transformed everything over the last decade, not only in terms of IT but also in terms of operations and jobs. The World Economic Forum predicts a net loss of 5 million jobs by 2020. Data jobs, however, will see a significant rise, with one forecast estimating 12% growth in technology-related jobs by 2024. If the forecasts are true, then it is the right time to revisit future goals. Forbes has listed the top six job titles in demand in 2018 for tech and data jobs; Data Scientist tops the list of the most in-demand tech jobs of 2018, with a median base salary of $100,000.
(Source - https://www.forbes.com/sites/bernardmarr/2018/05/09/the-6-top-data-jobs-in-2018/2/#72d06d4d6650)
Spoiler alert: Game of Thrones, HBO's fantasy series based on George R.R. Martin's novels "A Song of Ice and Fire", is the most-viewed TV series of all time. No doubt any news about the much-awaited Season 8, coming in April 2019, makes its fans go crazy. There are tons of blogs and videos on the internet from GoT fans trying to decipher the plot and who will win the final battle, but what is crazier is the use of data science by Taylor Larkin to predict the likelihood of deaths of characters in GoT Season 8. For this prediction, Larkin gathered information on thousands of characters from a fan-made wiki, including age, house, relatives already killed and status. All of this was fed into DataRobot's automated machine learning, which produced the probability of death for each character in the final season.
(Source - https://www.fastcompany.com/40570808/heres-who-dies-in-game-of-thrones-s8-according-to-data-science)
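Larkin's actual pipeline ran on DataRobot, but the general shape of the exercise, turning character attributes into numeric features and fitting a classifier that outputs a death probability, can be sketched in plain Python. The features, toy data and tiny hand-rolled logistic regression below are purely illustrative assumptions, not Larkin's model or DataRobot's system.

```python
import math

# Toy feature vectors per character: [age / 100, is_great_house, relatives_killed].
# Labels: 1 = died, 0 = survived. All values are made up for illustration.
TRAIN = [
    ([0.58, 1, 2], 1),
    ([0.17, 1, 1], 0),
    ([0.35, 0, 0], 0),
    ([0.60, 1, 3], 1),
    ([0.22, 0, 1], 0),
    ([0.45, 1, 2], 1),
]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to keep math.exp from overflowing
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(data, lr=0.1, epochs=2000):
    """Fit logistic-regression weights with plain stochastic gradient descent."""
    n_features = len(data[0][0])
    w, b = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the linear score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def death_probability(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train_logistic(TRAIN)
# A hypothetical elderly great-house character with two relatives already killed:
print(round(death_probability(w, b, [0.55, 1, 2]), 2))
```

Automated ML platforms like DataRobot do essentially this at much larger scale: try many model families over the featurized characters and report each character's predicted probability of the positive class, here "dies in Season 8".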
With the ongoing boom in AI, there is a plethora of startups providing services around AI and machine learning. LightTag, founded by Tal Perry, a former NLP researcher at Citi, is a text annotation platform that helps data scientists quickly build training data for deep learning and AI projects. Perry has said: "What I've taken from [my previous positions] to LightTag is an understanding that labeled data is more important to success in machine learning than clever algorithms". The success of an AI project depends on how well the input data has been labelled, and this in turn depends on the team of labelers, which is where inaccuracy can arise. With its clever UI, team-based workflow, AI assistance and quality checks, LightTag is an effort to overcome this problem.
(Source - https://techcrunch.com/2018/05/11/lighttag/)
To stay ahead in the machine learning and data science race, many big companies are acquiring small niche players that have innovative products to offer. Oracle has recently acquired DataScience.com, whose platform analyzes huge volumes of data and creates models and algorithms. According to reports, it will give customers a single platform for SaaS and PaaS offerings, harnessing the power of Oracle Cloud Infrastructure.
(Source - https://www.marketwatch.com/story/oracle-buys-datasciencecom-to-boost-big-data-analytics-offerings-2018-05-16)