Wednesday, May 11, 2016

Sparkline Charts

  1. http://jsfiddle.net/gh/get/jquery/1.9.1/highslide-software/highcharts.com/tree/master/samples/highcharts/demo/sparkline/
  2. http://www.anychart.com/products/anychart/gallery/Sparklines/


JSparklines

JSparklines makes it straightforward to visualize numbers in Java tables using sparklines. All that is needed is a couple of lines of code.
The charts are created using JFreeChart and added to the table columns using custom TableCellRenderers.
It supports more than 27 types of charts/renderers, including bar charts, line charts, stacked bar charts, bar charts with error bars, pie charts, scatter plots, interval charts, area charts, heat maps, and box plots.
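To make the renderer pattern concrete, here is a minimal sketch of a Swing table with a sparkline bar chart in one column. The renderer class, package, and constructor are written from my reading of the JSparklines docs and should be treated as assumptions; see the "How to use JSparklines" wiki page below for the exact API.

import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTable;
import org.jfree.chart.plot.PlotOrientation;
import no.uib.jsparklines.renderers.JSparklinesBarChartTableCellRenderer; // assumed package

public class SparklineTableDemo {
    public static void main(String[] args) {
        // A plain Swing table: one text column, one numeric column to visualize.
        JTable table = new JTable(
                new Object[][] {
                        { "Sample A", Double.valueOf(34) },
                        { "Sample B", Double.valueOf(78) },
                        { "Sample C", Double.valueOf(12) } },
                new Object[] { "Name", "Coverage" });

        // Swap the numeric column's default renderer for a sparkline bar chart;
        // 100.0 is the assumed maximum value used to scale the bars.
        table.getColumnModel().getColumn(1).setCellRenderer(
                new JSparklinesBarChartTableCellRenderer(
                        PlotOrientation.HORIZONTAL, Double.valueOf(100)));

        JFrame frame = new JFrame("JSparklines demo");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(new JScrollPane(table));
        frame.pack();
        frame.setVisible(true);
    }
}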

Download: v1.0.8 (all platforms) | Release Notes | JavaDoc

How to use JSparklines

See the How to use JSparklines wiki page for code examples. JSparklines is also available as a Maven dependency.
If you have questions or would like to contribute to the JSparklines project, please contact the developers.


Projects using JSparklines

  • PeptideShaker: interpretation of proteomics identification results. Vaudel et al: Nature Biotechnol. 2015 Jan;33(1):22-24.
  • DeNovoGUI: de novo sequencing of tandem mass spectra. Muth et al: J Proteome Res. 2014 Feb 7;13(2):1143-6.
  • SearchGUI: graphical user interface for proteomics identification search engines. Vaudel et al: Proteomics 2011;11(5):996-9.
  • thermo-msf-parser: parser and viewer for Thermo MSF files. Colaert et al: J Proteome Res. 2011;10(8):3840-3.
  • Fragmentation Analyzer: analyzing MS/MS fragmentation data. Barsnes et al: Proteomics 2010;10(5):1087-90.
  • MetaProteomeAnalyzer: analyzing meta-proteomics data. Muth et al: J Proteome Res. 2015 Mar 6;14(3):1557-65.
  • proteocloud: proteomics cloud computing pipeline. Muth et al: J Proteomics. 2013 Jan 8. pii: S1874-3919(13)00013-4.
Are you using JSparklines and would like your project listed here? Contact the developers of JSparklines.

Tuesday, May 10, 2016

Database vs Data Warehouse: A Comparative Review

Ref: http://revistaie.ase.ro/content/43/11-velicanu.pdf


A question I often hear out in the field is: I already have a database, so why do I need a data warehouse for healthcare analytics? What is the difference between a database vs. a data warehouse? These questions are fair ones.
For years, I’ve worked with databases in healthcare and in other industries, so I’m very familiar with the technical ins and outs of this topic. In this post, I’ll do my best to introduce these technical concepts in a way that everyone can understand.
Before diving into the topic, I want to quickly highlight the importance of analytics in healthcare. If you don't understand the importance of analytics, discussing the distinction between a database and a data warehouse won't be relevant to you. Here it is in a nutshell: the future of healthcare depends on our ability to use the massive amounts of data now available to drive better quality at a lower cost. If you can't perform analytics to make sense of your data, you'll have trouble improving quality and costs, and you won't succeed in the new healthcare environment.

The High-level Distinction Between Databases and Data Warehouses

What I will refer to as a “database” in this post is one designed to make transactional systems run efficiently. Typically, this type of database is an OLTP (online transaction processing) database. An electronic health record (EHR) system is a great example of an application that runs on an OLTP database. In fact, an OLTP database is typically constrained to a single application.
The important fact is that a transactional database doesn't lend itself to analytics. To effectively perform analytics, you need a data warehouse. A data warehouse is a database of a different kind: an OLAP (online analytical processing) database. A data warehouse exists as a layer on top of another database or databases (usually OLTP databases). The data warehouse takes the data from all these databases and creates a layer optimized for and dedicated to analytics.
So the short answer to the question I posed above is this: A database designed to handle transactions isn’t designed to handle analytics. It isn’t structured to do analytics well. A data warehouse, on the other hand, is structured to make analytics fast and easy.
In healthcare today, there has been a lot of money and time spent on transactional systems like EHRs. The industry is now ready to pull the data out of all these systems and use it to drive quality and cost improvements. And that’s where a data warehouse comes into play.

Databases versus Data Warehouses: The Details

Now that you have the overall idea, I want to go into more detail about some of the main distinctions between a database and a data warehouse. Because I'm a visual person (and a database guy who likes rows and columns), I'll compare and contrast the two in the following table:

Database vs. Data Warehouse

Definition
Database: Any collection of data organized for storage, accessibility, and retrieval.
Data warehouse: A type of database that integrates copies of transaction data from disparate source systems and provisions them for analytical use.

Types
Database: There are different types of databases, but the term usually applies to an OLTP application database, which is the focus of the rest of this comparison. Other types of databases include OLAP (used for data warehouses), XML, CSV files, flat text, and even Excel spreadsheets. We've actually found that many healthcare organizations use Excel spreadsheets to perform analytics (a solution that is not scalable).
Data warehouse: A data warehouse is an OLAP database, layered on top of OLTP or other databases to perform analytics. An important side note about this type of database: not all OLAP databases are created equal. They differ according to how the data is modeled. Most data warehouses employ either an enterprise or dimensional data model, but at Health Catalyst, we advocate a unique, adaptive Late-Binding™ approach. You can learn more about why the Late-Binding™ approach is so important in healthcare analytics in Late-Binding vs. Models: A Comparison of Healthcare Data Warehouse Methodologies.

Similarities
Both OLTP and OLAP systems store and manage data in the form of tables, columns, indexes, keys, views, and data types. Both use SQL to query the data.

How used
Database: Typically constrained to a single application: one application equals one database. An EHR is a prime example of a healthcare application that runs on an OLTP database. OLTP allows for quick, real-time transactional processing. It is built for speed and to quickly record one targeted process (e.g., patient admission date and time).
Data warehouse: Accommodates data storage for any number of applications: one data warehouse equals infinite applications and infinite databases. OLAP allows for one source of truth for an organization's data, used to guide analysis and decision-making within the organization (e.g., total patients over age 18 who have been readmitted, by department and by month). Interestingly enough, complex queries like the one just described are much more difficult to handle in an OLTP database.

Service Level Agreement (SLA)
Database: OLTP databases must typically meet 99.99% uptime; system failure can result in chaos and lawsuits. The database is directly linked to the front-end application. Data is available in real time to serve the here-and-now needs of the organization; in healthcare, this data contributes to clinicians delivering precise, timely bedside care.
Data warehouse: With OLAP databases, SLAs are more flexible because occasional downtime for data loads is expected. The OLAP database is separated from front-end applications, which allows it to be scalable. Data is refreshed from source systems as needed (typically every 24 hours) and serves historical trend analysis and business decisions.

Optimization
Database: Optimized for performing read-write operations on single-point transactions; an OLTP database should deliver sub-second response times. Performing large analytical queries on such a database is bad practice, because it impacts the performance of the system for clinicians trying to use it for their day-to-day work: an analytical query could take several minutes to run, locking all clinicians out in the meantime.
Data warehouse: Optimized for efficiently reading and retrieving large data sets and for aggregating data. Because it works with such large data sets, an OLAP database is heavy on CPU and disk bandwidth. A data warehouse is designed to handle large analytical queries, which eliminates the performance strain that analytics would otherwise place on a transactional system.

Data organization
Database: An OLTP database structure features very complex tables and joins because the data is normalized (structured so that no data is duplicated). Making data relational in this way is what delivers the storage and processing efficiencies, and allows those sub-second response times.
Data warehouse: In an OLAP database, data is organized specifically to facilitate reporting and analysis, not quick-hitting transactional needs. The data is denormalized to enhance analytical query response times and provide ease of use for business users. Fewer tables and a simpler structure result in easier reporting and analysis.

Reporting/analysis
Database: Because of the number of table joins, analytical queries are very complex; they usually require the expertise of a developer or database administrator familiar with the application. Reporting is typically limited to more static, siloed needs. You can actually get quite a bit of reporting out of today's EHRs (which run on an OLTP database), but these reports are static, one-time lists in PDF format. For example, you might generate a monthly report of heart failure readmissions or a list of all patients with a central line inserted. These reports are helpful, particularly for real-time reporting for bedside care, but they don't allow in-depth analysis.
Data warehouse: With fewer table joins, analytical queries are much easier to perform. This means that semi-technical users (anyone who can write a basic SQL query) can fill their own needs. The possibilities for reporting and analysis are endless. When it comes to analyzing data, a static list is insufficient; there is an intrinsic need for aggregating, summarizing, and drilling down into the data. A data warehouse enables you to perform many types of analysis:
  • Descriptive (what has happened)
  • Diagnostic (why it happened)
  • Predictive (what will happen)
  • Prescriptive (what to do about it)
This is the level of analytics required to drive real quality and cost improvement in healthcare.
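To make the last row concrete, here is a sketch of the kind of analytical query a warehouse is built for, using the readmissions example from the table above. The fact table and every column name are invented for illustration; a real warehouse schema will differ.

-- Hypothetical denormalized fact table:
--   readmissions(patient_id, patient_age, department, readmission_date)
SELECT
    department,
    YEAR(readmission_date)  AS readmit_year,
    MONTH(readmission_date) AS readmit_month,
    COUNT(DISTINCT patient_id) AS readmitted_patients
FROM readmissions
WHERE patient_age > 18
GROUP BY department, YEAR(readmission_date), MONTH(readmission_date)
ORDER BY department, readmit_year, readmit_month;

Note how few joins are involved: because the data is denormalized, a semi-technical user can write this query directly instead of waiting on a developer.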

I hope the information I’ve included here has helped you understand why data warehouses are so important to the future of healthcare. Improving quality and cost requires analytics. And analytics requires a data warehouse.
An OLTP database like that used by EHRs can’t handle the necessary level of analytics. My rule of thumb is this: If you get data into your EHR, you can report on it. If you get it into a data warehouse, you can analyze it.
It’s that simple.

Tuesday, April 19, 2016

Getting Started With ASP.NET 5 On Ubuntu 14.04.2 LTS


 16. June 2015 22:59
Ever since the .NET stack went open source last year, there has been huge excitement among developers about building .NET apps that are no longer limited to the Windows platform. I tried to install ASP.NET vNext on an Ubuntu VM and failed terribly on the first go. Why? Because the tutorial I used was quite old and I messed up the installation of the prerequisites. But I got everything working on the second try. So here are the steps and commands that will get you started with ASP.NET vNext on Ubuntu.
I am setting up a fresh VM for development on Ubuntu 14.04.2 LTS.
Installing Mono
The first thing to do is install Mono. For folks who are new to the Linux environment, Mono is a community-driven project that allows developers to build and run .NET applications on Linux platforms. Here is the set of commands to execute to install Mono.
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update
Install the latest version of Mono available.
sudo apt-get install mono-complete
To check if Mono was successfully installed, or to determine the version of Mono on your machine, run the below command in the terminal.
mono --version
Installing LibUV
As stated on GitHub:
Libuv is a multi-platform asynchronous IO library that is used by the KestrelHttpServer that we will use to host our web applications.
Running the below command will install libuv along with the dependencies required to build it.
sudo apt-get install automake libtool
Get the source, then build and install it:
curl -sSL https://github.com/libuv/libuv/archive/v1.4.2.tar.gz | sudo tar zxfv - -C /usr/local/src
cd /usr/local/src/libuv-1.4.2
sudo sh autogen.sh
sudo ./configure
sudo make 
sudo make install
sudo rm -rf /usr/local/src/libuv-1.4.2 && cd ~/
sudo ldconfig
Here is a note from the GitHub repo that explains what the above set of commands is doing.
NOTE: make install puts libuv.so.1 in /usr/local/lib, in the above commands ldconfig is used to update ld.so.cache so that dlopen (see man dlopen) can load it. If you are getting libuv some other way or not running make install then you need to ensure that dlopen is capable of loading libuv.so.1
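To confirm that the linker cache picked up libuv, a quick check (my addition; this step is not in the original tutorial):
ldconfig -p | grep libuv
This should list libuv.so.1 under /usr/local/lib.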
Getting .NET Version Manager (DNVM)
DNVM is a command-line tool that allows you to get new builds of the DNX (.NET Execution Environment) and to switch between them. To get DNVM, run the below command in the terminal.
curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh
To check that DNVM was successfully installed on your machine, type dnvm in the terminal; it should print DNVM's usage information.
At any point, if you want to list the installed DNX runtimes, run the below command
dnvm list
The next step is to upgrade DNVM so you can use the dnx and dnu commands. Run the following command in the terminal
dnvm upgrade
Once this is done, we are all set to run an ASP.NET vNext application on the Ubuntu box. Clone the aspnet/Home repository from GitHub. If you don't have Git installed, then install it with this simple command.
sudo apt-get install git
For simplicity, I have created a new directory on Ubuntu desktop named vnext. You can name the directory as you wish. Navigate to this directory in the terminal and clone the aspnet/Home repository.
git clone https://github.com/aspnet/Home.git
After the repository has been cloned, navigate to the 1.0.0-beta4 directory.
You will see three sample applications that you can test. For this tutorial I am going to check out the HelloMvc application. Get inside the HelloMvc directory and then run the command
dnu restore
This will take some time to execute. When you run this command, the project.json.lock file gets created and the package restore starts. I didn't face this problem myself, but there is a chance that someone will: as the restore is finalizing, it may say permission is denied. To resolve this error, you can change the permissions of the folder by running the following command.
sudo chmod -R 755 HelloMvc
As a rule of thumb, you should change permissions to 755 for directories and 644 for files.
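One way to apply that rule recursively with standard tools (my own addition, not part of the original tutorial; adjust the path as needed):
find HelloMvc -type d -exec chmod 755 {} \;
find HelloMvc -type f -exec chmod 644 {} \;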
After the execution is completed, you can start the server by running the command.
dnx . kestrel
This command will work for both the web and MVC applications. If you plan to test the console application instead, you can run the following command.
dnx . run
The server runs at port 5004. Fire up the browser and type in http://localhost:5004/
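If you are working on a headless VM, you can also hit the server from the terminal with curl (an extra check of mine, not from the original post):
curl http://localhost:5004/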
Hope this is helpful for first-time users of Linux.

Sunday, March 20, 2016

MSSQL - find a substring in any table's string columns AND find which tables reference a given table
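-- For every base table, probe every string-typed column for a search
-- string via dynamic SQL, printing "schema.table, column" for each hit.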

DECLARE
    @search_string  VARCHAR(100),
    @table_name     SYSNAME,
    @table_schema   SYSNAME,
    @column_name    SYSNAME,
    @sql_string     VARCHAR(2000)

SET @search_string = 'CCC Helpdesk'

DECLARE tables_cur CURSOR FOR SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'

OPEN tables_cur

FETCH NEXT FROM tables_cur INTO @table_schema, @table_name

WHILE (@@FETCH_STATUS = 0)
BEGIN
    DECLARE columns_cur CURSOR FOR SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = @table_schema AND TABLE_NAME = @table_name AND COLLATION_NAME IS NOT NULL  -- Only strings have this and they always have it

    OPEN columns_cur

    FETCH NEXT FROM columns_cur INTO @column_name
    WHILE (@@FETCH_STATUS = 0)
    BEGIN
        SET @sql_string = 'IF EXISTS (SELECT * FROM ' + QUOTENAME(@table_schema) + '.' + QUOTENAME(@table_name) + ' WHERE ' + QUOTENAME(@column_name) + ' LIKE ''%' + @search_string + '%'') PRINT ''' + QUOTENAME(@table_schema) + '.' + QUOTENAME(@table_name) + ', ' + QUOTENAME(@column_name) + ''''

        EXECUTE(@sql_string)

        FETCH NEXT FROM columns_cur INTO @column_name
    END

    CLOSE columns_cur

    DEALLOCATE columns_cur

    FETCH NEXT FROM tables_cur INTO @table_schema, @table_name
END

CLOSE tables_cur

DEALLOCATE tables_cur

================================================================================================================================
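-- Companion query: list every foreign key (and its columns) in other
-- tables that references a given table (here, QueueDefinition).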

SELECT
    fk.name 'FK Name',
    tp.name 'Parent table',
    cp.name, cp.column_id,
    tr.name 'Referenced table',
    cr.name, cr.column_id
FROM
    sys.foreign_keys fk
INNER JOIN
    sys.tables tp ON fk.parent_object_id = tp.object_id
INNER JOIN
    sys.tables tr ON fk.referenced_object_id = tr.object_id
INNER JOIN
    sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
INNER JOIN
    sys.columns cp ON fkc.parent_column_id = cp.column_id AND fkc.parent_object_id = cp.object_id
INNER JOIN
    sys.columns cr ON fkc.referenced_column_id = cr.column_id AND fkc.referenced_object_id = cr.object_id
WHERE
    tr.name = 'QueueDefinition'
ORDER BY
    tp.name, cp.column_id

Wednesday, February 17, 2016

MySQL convert datetime to Unix timestamp

Try this query to convert a DATETIME to a Unix timestamp:
SELECT UNIX_TIMESTAMP(STR_TO_DATE('Apr 15 2012 12:00AM', '%M %d %Y %h:%i%p'))
And this query to change the date format:
SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(STR_TO_DATE('Apr 15 2012 12:00AM', '%M %d %Y %h:%i%p')),'%m-%d-%Y %h:%i:%p')
SELECT UNIX_TIMESTAMP(STR_TO_DATE('30.1.2016', '%d.%m.%Y'))*1000;
-> '1454086800000'

SELECT FROM_UNIXTIME('1454086800','%d-%m-%Y %h:%i%p');
-> '30-01-2016 12:00AM'
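The same functions work directly on a DATETIME column. Note that UNIX_TIMESTAMP returns seconds (multiply by 1000 for milliseconds, as above) and interprets the value in the session time zone. The table and column names below are invented for illustration:

-- Hypothetical table: orders(id INT, created_at DATETIME)
SELECT
    id,
    UNIX_TIMESTAMP(created_at)        AS created_unix,    -- seconds
    UNIX_TIMESTAMP(created_at) * 1000 AS created_unix_ms  -- milliseconds
FROM orders;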

Tuesday, February 16, 2016

Top 10 Best Free Data Recovery Software of 2016

ShortBytes: FossBytes brings you a list of the best data recovery software of 2016, all totally free. These data recovery tools save a lot of hassle after we accidentally delete important files or format a hard drive without taking a backup. Using this free recovery software, you can get your data back on your PC.

We lose important data from our hard disks by accidentally pressing the Delete key. Sometimes, a software bug or virus can also corrupt your hard disk. In that case, you need the best data recovery software or a recovery tool to get your important data back at any cost.
This is where data recovery software comes in handy. We have compiled a list of the best free data recovery software, considering factors such as whether the software can recover RAW, unallocated, corrupt, or formatted hard disks; its ability to recover from different file systems such as FAT, FAT32, HFS, and NTFS; the array of devices supported; the time taken for file recovery; and user friendliness, to name a few. Here is the list:
TOP 10 BEST DATA RECOVERY SOFTWARE 2016 FOR FREE:
1. Recuva:

The fact that Recuva is at the top of the best data recovery software list may not come as a surprise to most of you. Some of the features that put Recuva at the top of the list are:
Superior file recovery
Advanced deep scan mode
Secure overwrite feature that uses industry and military standard deletion techniques
Ability to recover files from damaged or newly formatted drives
Easy user interface
2. TestDisk:

A list of the best data recovery software can hardly be called complete without a mention of TestDisk. Packed with features and a file recovery system that can easily overshadow that of any other data recovery software, TestDisk has a lot to offer both novices and experts. Here are some of TestDisk's features:
Allows users to recover/rebuild the boot sector
Fixes or recovers a deleted partition table, and can reliably undelete files from FAT, exFAT, NTFS, and ext2 file systems
Available on all major platforms such as Microsoft Windows, Mac OS X, and Linux, and is in fact quite popular, as it can be found on various Linux live CDs
Being a command-line tool, however, TestDisk may not be suitable for some users.
3. Undelete 360:

With the looks of a typical Office application, Undelete 360 is built on a fast yet efficient algorithm that enables the user to undelete files. Here are some of the features of Undelete 360:
Works on a variety of devices such as digital cameras, USB drives, etc.
Includes a data-wiping tool and a hex viewer, along with the ability to preview files before recovery
Does a great job of recovering recently deleted files compared to other free data recovery software
Able to recover files of a wide variety of types, such as DOC, HTML, AVI, MP3, JPEG, JPG, PNG, GIF, etc.
However, its scanning speed needs major improvement, and it also lags behind the competition in terms of recovery performance.
4. PhotoRec:

Definitely one of the best data recovery tools out there, PhotoRec is widely acclaimed for its powerful file recovery across a wide variety of devices, ranging from digital cameras to hard disks. Here are some of the features of the PhotoRec recovery tool:
Compatible with almost all major platforms, such as Microsoft Windows, Linux, Mac OS X, etc.
Comes packed with the ability to recover more than 440 different file formats
Features such as the 'unformat function' and the ability to add your own custom file types do come in handy
However, I wouldn't advise this free data recovery software for beginners, as it is completely devoid of a GUI; its command-line interface may intimidate some users.
5. Pandora Recovery:

Pandora Recovery is one of the most reliable and effective free data recovery tools out there. The Pandora recovery tool has a lot to offer its users. Here are some of the features of this tool:
Ability to recover deleted files from NTFS- and FAT-formatted volumes
Preview deleted files of certain types (image and text files) without performing recovery
Surface scan (which allows you to recover data from drives that have been formatted) and the ability to recover archived, hidden, encrypted, and compressed files; it packs quite a punch
Its interface is very easy to get the hang of and provides an explorer-like view, along with colour-coded recovery percentage indicators
However, its file detection system is not that reliable and needs to be improved further. The software could also be made portable, so that it doesn't consume space on the hard disk and thereby overwrite the very files we wish to recover.
6. MiniTool Partition Recovery:
Standard undelete programs like Recuva, Pandora, etc. are perfect for recovering a few deleted files, but what if you have lost an entire partition? Then you will probably need a specialist application like MiniTool Partition Recovery. Here are some of the great features of this recovery tool specialized in partition recovery:
An easy wizard-based interface
Specialized in data recovery on an entire partition
Point the MiniTool Partition Recovery tool at the problematic drive and it will scan for the missing partition
Generates a recovery report that lets you know what the program has found to help you in data recovery
However, you can't run its data recovery from a bootable disc.
7. Wise Data Recovery:
The Wise Data Recovery tool is one of the fastest undelete tools among the best data recovery software. Besides being fast, it also comes with some nice features:
An easy and intuitive interface
Can recover deleted files from local drives, USB drives, cameras, memory cards, removable media devices, etc.
Faster search filtering by selecting built-in file extension groups based on file type
Compatible with Windows XP through Windows 8
Although the scanning is fast, the program has no deep scan mode, which could mean a slightly reduced chance of recovering the hardest-to-recover files.
8. Puran File Recovery:
Puran File Recovery works in three main recovery modes:
Default Quick Scan (It simply reads the FAT or NTFS file system for deleted files from the recycle bin etc.)
Deep Scan (includes scanning all available free space) and,
Full Scan (checks all space on the device for the best chance of recovery)
Works from Windows XP to Windows 8
Using the “Find lost files” option turns Puran File Recovery into a tool to recover all files from a lost or damaged partition. Something else you can do is edit the custom scan list which stores file signatures for more accurate recovery of badly damaged data.
9. PC Inspector File Recovery
PC Inspector File Recovery works well on both FAT and NTFS drives, even if the boot sector has been erased or damaged. Here are some of the features of this recovery tool:
Simple search dialog to help locate files by name
Recovered files can be restored to a local hard disk or network drive
Can recover images, videos, and several other file types in formats such as ARJ, AVI, BMP, DOC, DXF, XLS, EXE, GIF, HLP, HTML, JPG, LZH, MID, MOV, MP3, PDF, PNG, RTF, TAR, TIF, WAV, and ZIP
Can scan just specific areas of the disc with the cluster scanner
Works perfectly from Windows XP to Windows 7
However, the interface is a slightly confusing mess of tabs, so be careful with this tool.
10. Restoration
The Restoration data recovery program takes the final position in this list of the top 10 best data recovery tools. It works much like the other free undelete apps on this list. Even in tenth position, here are a few things we liked about this data recovery tool:
Very simple and easy to use
No confusing or cryptic buttons, and no complicated file recovery procedures
Can recover data and files from hard drives, memory cards, USB drives, and other external drives
Does not need to be installed and can run data recovery from a floppy disk or USB drive
Supports Windows Vista, XP, 2000, NT, ME, 98, and 95, and has also been successfully tested on Windows 7 and Windows 10
Sometimes runs into problems on Windows 8
Editor’s pick: 
I would personally recommend Piriform's Recuva to all our readers, hands down. With superior file recovery, an advanced deep scan mode, a secure overwrite feature that uses industry and military standard deletion techniques, and the ability to recover files from damaged or newly formatted drives, Recuva is undeniably one of the best free data recovery tools out there. Its portability (the ability to run without installation) is one feature that sets it apart from the others.
The user interface won't let you down either, with a file-recovery wizard and a manual mode at your disposal, colour coding (indicating the probability of recovering a file), and the ability to preview files before undeleting them. Recuva is definitely a notch above the rest and undoubtedly the most complete and reliable free data recovery software available today.

Have some other data recovery software in mind? Give us your suggestions in the comments below.