Commit 46b4b00a authored by lindawangg

new covidnet-cxr model

parent 29d6267c
......@@ -17,5 +17,6 @@ model.py
archive/
requirements.txt
test_dups.py
test_COVIDx2.txt
train_COVIDx2.txt
test_COVIDx.txt
train_COVIDx.txt
create_COVIDx_v2.ipynb
......@@ -2,8 +2,10 @@
**Note: The COVID-Net models provided here are intended to be used as reference models that can be built upon and enhanced as new data becomes available. They are currently at a research stage and not yet intended as production-ready models (not meant for direct clinical diagnosis); we are working continuously to improve them as new data becomes available. Please do not use COVID-Net for self-diagnosis; seek help from your local health authorities.**
**Update 04/14/2020: We released two new models, COVIDNet-CXR Small and COVIDNet-CXR Large, which were trained on a new COVIDx dataset with both PA and AP X-rays from Cohen et al., as well as additional COVID-19 X-ray images from Figure1.**
<p align="center">
<img src="assets/covidnet-small-exp.png" alt="photo not available" width="70%" height="70%">
<img src="assets/covidnet-cxr-small-exp.png" alt="photo not available" width="70%" height="70%">
<br>
<em>Example chest radiography images of COVID-19 cases from 2 different patients and their associated critical factors (highlighted in red) as identified by GSInquire.</em>
</p>
......@@ -12,9 +14,9 @@
Vision and Image Processing Research Group, University of Waterloo, Canada\
DarwinAI Corp., Canada
The COVID-19 pandemic continues to have a devastating effect on the health and well-being of the global population. A critical step in the fight against COVID-19 is effective screening of infected patients, with one of the key screening approaches being radiological imaging using chest radiography. It was found in early studies that patients present abnormalities in chest radiography images that are characteristic of those infected with COVID-19. Motivated by this, a number of artificial intelligence (AI) systems based on deep learning have been proposed and results have been shown to be quite promising in terms of accuracy in detecting patients infected with COVID-19 using chest radiography images. However, to the best of the authors' knowledge, these developed AI systems have been closed source and unavailable to the research community for deeper understanding and extension, and unavailable for public access and use. Therefore, in this study we introduce COVID-Net, a deep convolutional neural network design tailored for the detection of COVID-19 cases from chest radiography images that is open source and available to the general public. We also describe the chest radiography dataset leveraged to train COVID-Net, which we will refer to as COVIDx and is comprised of 16,756 chest radiography images across 13,645 patient cases from two open access data repositories. Furthermore, we investigate how COVID-Net makes predictions using an explainability method in an attempt to gain deeper insights into critical factors associated with COVID cases, which can aid clinicians in improved screening. **By no means a production-ready solution**, the hope is that the open access COVID-Net, along with the description on constructing the open source COVIDx dataset, will be leveraged and build upon by both researchers and citizen data scientists alike to accelerate the development of highly accurate yet practical deep learning solutions for detecting COVID-19 cases and accelerate treatment of those who need it the most.
The COVID-19 pandemic continues to have a devastating effect on the health and well-being of the global population. A critical step in the fight against COVID-19 is effective screening of infected patients, with one of the key screening approaches being radiological imaging using chest radiography. It was found in early studies that patients present abnormalities in chest radiography images that are characteristic of those infected with COVID-19. Motivated by this, a number of artificial intelligence (AI) systems based on deep learning have been proposed and results have been shown to be quite promising in terms of accuracy in detecting patients infected with COVID-19 using chest radiography images. However, to the best of the authors' knowledge, these developed AI systems have been closed source and unavailable to the research community for deeper understanding and extension, and unavailable for public access and use. Therefore, in this study we introduce COVID-Net, a deep convolutional neural network design tailored for the detection of COVID-19 cases from chest radiography images that is open source and available to the general public. We also describe the chest radiography dataset leveraged to train COVID-Net, which we refer to as COVIDx and which comprises 13,800 chest radiography images across 13,725 patient cases from three open access data repositories. Furthermore, we investigate how COVID-Net makes predictions using an explainability method in an attempt to gain deeper insights into critical factors associated with COVID cases, which can aid clinicians in improved screening. **By no means a production-ready solution**, the hope is that the open access COVID-Net, along with the description on constructing the open source COVIDx dataset, will be leveraged and built upon by both researchers and citizen data scientists alike to accelerate the development of highly accurate yet practical deep learning solutions for detecting COVID-19 cases and accelerate treatment of those who need it the most.
For a detailed description of the methodology behind COVID-Net and a full description of the COVIDx dataset, please click [here](https://arxiv.org/pdf/2003.09871.pdf).
For a detailed description of the methodology behind COVID-Net and a full description of the COVIDx dataset, please click [here](assets/COVIDNet_CXR.pdf).
Currently, the COVID-Net team is working on COVID-RiskNet, a deep neural network tailored for COVID-19 risk stratification. Stay tuned as we make it available soon.
......@@ -58,7 +60,6 @@ The main requirements are listed below:
* OpenCV 4.2.0
* Python 3.6
* Numpy
* OpenCV
* Scikit-Learn
* Matplotlib
......@@ -69,39 +70,40 @@ Additional requirements to generate dataset:
* Jupyter
## COVIDx Dataset
**Update: we have released the brand-new COVIDx dataset with 16,756 chest radiography images across 13,645 patient cases.**
**Update 04/14/2020: Released a new dataset with 152 COVID-19 train and 31 COVID-19 test samples. New X-ray images are constantly being added to covid-chestxray-dataset and the Figure1 COVID dataset, so we include train_COVIDx2.txt and test_COVIDx2.txt, which list the X-ray images we used for training and testing the COVIDNet-CXR models.**
The current COVIDx dataset is constructed from the following open source chest radiography datasets:
* https://github.com/ieee8023/covid-chestxray-dataset
* https://github.com/agchung/Figure1-COVID-chestxray-dataset
* https://www.kaggle.com/c/rsna-pneumonia-detection-challenge (which came from: https://nihcc.app.box.com/v/ChestXray-NIHCC)
We especially thank the Radiological Society of North America and others involved in the RSNA Pneumonia Detection Challenge, and Dr. Joseph Paul Cohen and the team at MILA involved in the COVID-19 image data collection project, for making data available to the global community.
We especially thank the Radiological Society of North America, the National Institutes of Health, Figure1, and Dr. Joseph Paul Cohen and the team at MILA involved in the COVID-19 image data collection project for making data available to the global community.
### Steps to generate the dataset
1. Download the datasets listed above
* `git clone https://github.com/ieee8023/covid-chestxray-dataset.git`
* `git clone https://github.com/agchung/Figure1-COVID-chestxray-dataset`
* go to this [link](https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/data) to download the RSNA pneumonia dataset
2. Create a `data` directory and, within it, create `train` and `test` directories
3. Use [create\_COVIDx\_v2.ipynb](create_COVIDx_v2.ipynb) to combine the two dataset to create COVIDx. Make sure to remember to change the file paths.
3. Use [create\_COVIDx\_v3.ipynb](create_COVIDx_v3.ipynb) to combine the three datasets into COVIDx. Remember to change the file paths.
4. We provide the train and test txt files with patientId, image path, and label (normal, pneumonia, or COVID-19), one sample per space-separated line; a parsing sketch follows this list. Each file is described below:
* [train\_COVIDx.txt](train_COVIDx.txt): This file contains the samples used for training.
* [test\_COVIDx.txt](test_COVIDx.txt): This file contains the samples used for testing.
* [train\_COVIDx2.txt](train_COVIDx2.txt): This file contains the samples used for training COVIDNet-CXR.
* [test\_COVIDx2.txt](test_COVIDx2.txt): This file contains the samples used for testing COVIDNet-CXR.
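The split files are plain space-separated text, one sample per line; lines that came from covid-chestxray-dataset carry a trailing view column. Below is a minimal loading sketch, assuming that layout (`load_split` is an illustrative helper, not part of this repo):

```python
# Minimal sketch for reading a COVIDx split file (hypothetical helper).
def load_split(path):
    samples = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 3:
                continue  # skip blank or malformed lines
            # fields: patient id, image filename, label (+ optional view column)
            samples.append((fields[0], fields[1], fields[2]))
    return samples

train = load_split('train_COVIDx2.txt')
print(len(train), 'training samples; first:', train[0])
```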
### COVIDx data distribution
Chest radiography image distribution
| Type | Normal | Pneumonia | COVID-19 | Total |
|:-----:|:------:|:---------:|:--------:|:-----:|
| train | 7966 | 8514 | 66 | 16546 |
| test | 100 | 100 | 10 | 210 |
| train | 7966 | 5451 | 152 | 13569 |
| test | 100 | 100 | 31 | 231 |
Patient distribution
| Type | Normal | Pneumonia | COVID-19 | Total |
|:-----:|:------:|:---------:|:--------:|:------:|
| train | 7966 | 5429 | 48 | 13443 |
| test | 100 | 98 | 5 | 203 |
| train | 7966 | 5440 | 107 | 13513 |
| test | 100 | 98 | 14 | 212 |
## Training and Evaluation
The network takes as input an image of shape (N, 224, 224, 3) and outputs softmax probabilities of shape (N, 3), where N is the batch size.
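For illustration, here is a minimal sketch of the preprocessing applied throughout this repo before an image reaches the network (crop away the top sixth, resize to 224x224, scale to [0, 1]); the image path is just the bundled example:

```python
import cv2
import numpy as np

def preprocess(imagepath):
    # mirrors the repo's preprocessing: crop, resize, rescale
    x = cv2.imread(imagepath)
    h, w, c = x.shape
    x = x[int(h / 6):, :]             # drop the top sixth of the image
    x = cv2.resize(x, (224, 224))     # match the 224x224x3 network input
    x = x.astype('float32') / 255.0   # scale pixel values to [0, 1]
    return np.expand_dims(x, axis=0)  # shape (1, 224, 224, 3)

batch = preprocess('assets/ex-covid.jpeg')
print(batch.shape)  # (1, 224, 224, 3); the softmax output is then (1, 3)
```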
......@@ -133,17 +135,17 @@ TF training script from a pretrained model:
1. Download a model from the [pretrained models section](#pretrained-models)
2. Locate the model files and the X-ray image to run inference on
3. To inference, `python inference.py --weightspath models/COVID-Netv2 --metaname model.meta_eval --ckptname model-2069 --imagepath assets/ex-covid.jpeg`
3. To run inference: `python inference.py --weightspath models/COVIDNet-CXR-Large --metaname model.meta_eval --ckptname model-8485 --imagepath assets/ex-covid.jpeg`
4. For more options and information, `python inference.py --help`
## Results
These are the final results for COVID-Net Small and COVID-Net Large.
These are the final results for COVIDNet-CXR Small and COVIDNet-CXR Large.
### COVIDNet Small
### COVIDNet-CXR Small
<p align="center">
<img src="assets/cm-covidnet-small.png" alt="photo not available" width="50%" height="50%">
<img src="assets/cm-covidnetcxr-small.png" alt="photo not available" width="50%" height="50%">
<br>
<em>Confusion matrix for COVID-Net on the COVIDx test dataset.</em>
<em>Confusion matrix for COVIDNet-CXR Small on the COVIDx test dataset.</em>
</p>
<div class="tg-wrap" align="center"><table class="tg">
......@@ -156,13 +158,13 @@ These are the final results for COVID-Net Small and COVID-Net Large.
<td class="tg-7btt">COVID-19</td>
</tr>
<tr>
<td class="tg-c3ow">95.0</td>
<td class="tg-c3ow">91.0</td>
<td class="tg-c3ow">80.0</td>
<td class="tg-c3ow">97.0</td>
<td class="tg-c3ow">90.0</td>
<td class="tg-c3ow">87.1</td>
</tr>
</table></div>
<div class="tg-wrap"><table class="tg">
<div class="tg-wrap" align="center"><table class="tg">
<tr>
<th class="tg-7btt" colspan="3">Positive Predictive Value (%)</th>
</tr>
......@@ -172,17 +174,18 @@ These are the final results for COVID-Net Small and COVID-Net Large.
<td class="tg-7btt">COVID-19</td>
</tr>
<tr>
<td class="tg-c3ow">91.3</td>
<td class="tg-c3ow">93.8</td>
<td class="tg-c3ow">88.9</td>
<td class="tg-c3ow">89.8</td>
<td class="tg-c3ow">94.7</td>
<td class="tg-c3ow">96.4</td>
</tr>
</table></div>
### COVID-Net Large
### COVIDNet-CXR Large
<p align="center">
<img src="assets/cm-covidnet-large.png" alt="photo not available" width="50%" height="50%">
<img src="assets/cm-covidnetcxr-large.png" alt="photo not available" width="50%" height="50%">
<br>
<em>Confusion matrix for COVID-Net on the COVIDx test dataset.</em>
<em>Confusion matrix for COVIDNet-CXR Large on the COVIDx test dataset.</em>
</p>
<div class="tg-wrap" align="center"><table class="tg">
......@@ -195,13 +198,13 @@ These are the final results for COVID-Net Small and COVID-Net Large.
<td class="tg-7btt">COVID-19</td>
</tr>
<tr>
<td class="tg-c3ow">94.0</td>
<td class="tg-c3ow">90.0</td>
<td class="tg-c3ow">90.0</td>
<td class="tg-c3ow">99.0</td>
<td class="tg-c3ow">89.0</td>
<td class="tg-c3ow">96.8</td>
</tr>
</table></div>
<div class="tg-wrap"><table class="tg">
<div class="tg-wrap" align="center"><table class="tg">
<tr>
<th class="tg-7btt" colspan="3">Positive Predictive Value (%)</th>
</tr>
......@@ -211,9 +214,9 @@ These are the final results for COVID-Net Small and COVID-Net Large.
<td class="tg-7btt">COVID-19</td>
</tr>
<tr>
<td class="tg-c3ow">90.4</td>
<td class="tg-c3ow">93.8</td>
<td class="tg-c3ow">90.0</td>
<td class="tg-c3ow">91.7</td>
<td class="tg-c3ow">98.9</td>
<td class="tg-c3ow">90.9</td>
</tr>
</table></div>
......@@ -221,5 +224,5 @@ These are the final results for COVID-Net Small and COVID-Net Large.
| Type | COVID-19 Sensitivity | # Params (M) | MACs (G) | Model |
|:-----:|:--------------------:|:------------:|:--------:|:-------------------:|
| ckpt | 80.0 | 116.6 | 2.26 |[COVID-Net Small](https://drive.google.com/file/d/1xrxK9swFVlFI-WAYcccIgm0tt9RgawXD/view?usp=sharing)|
| ckpt | 90.0 | 126.6 | 3.59 |[COVID-Net Large](https://drive.google.com/file/d/1djqWcxzRehtyJV9EQsppj1YdgsP2JRQy/view?usp=sharing)|
| ckpt | 87.1 | 117.4 | 2.26 |[COVIDNet-CXR Small](https://bit.ly/CovidNet-CXR-Small)|
| ckpt | 90.0 | 127.4 | 3.59 |[COVIDNet-CXR Large](https://bit.ly/CovidNet-CXR-Large)|
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import pandas as pd\n",
"import os\n",
"import random \n",
"from shutil import copyfile\n",
"import pydicom as dicom\n",
"import cv2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# set parameters here\n",
"savepath = 'data'\n",
"seed = 0\n",
"np.random.seed(seed) # Reset the seed so all runs are the same.\n",
"random.seed(seed)\n",
"MAXVAL = 255 # Range [0 255]\n",
"\n",
"# path to covid-19 dataset from https://github.com/ieee8023/covid-chestxray-dataset\n",
"cohen_imgpath = '../covid-chestxray-dataset/images' \n",
"cohen_csvpath = '../covid-chestxray-dataset/metadata.csv'\n",
"\n",
"# path to covid-14 dataset from https://github.com/agchung/Figure1-COVID-chestxray-dataset\n",
"fig1_imgpath = '../Figure1-COVID-chestxray-dataset/images'\n",
"fig1_csvpath = '../Figure1-COVID-chestxray-dataset/metadata.csv'\n",
"\n",
"# path to https://www.kaggle.com/c/rsna-pneumonia-detection-challenge\n",
"rsna_datapath = '../rsna-pneumonia-detection-challenge'\n",
"# get all the normal from here\n",
"rsna_csvname = 'stage_2_detailed_class_info.csv' \n",
"# get all the 1s from here since 1 indicate pneumonia\n",
"# found that images that aren't pneunmonia and also not normal are classified as 0s\n",
"rsna_csvname2 = 'stage_2_train_labels.csv' \n",
"rsna_imgpath = 'stage_2_train_images'\n",
"\n",
"# parameters for COVIDx dataset\n",
"train = []\n",
"test = []\n",
"test_count = {'normal': 0, 'pneumonia': 0, 'COVID-19': 0}\n",
"train_count = {'normal': 0, 'pneumonia': 0, 'COVID-19': 0}\n",
"\n",
"mapping = dict()\n",
"mapping['COVID-19'] = 'COVID-19'\n",
"mapping['SARS'] = 'pneumonia'\n",
"mapping['MERS'] = 'pneumonia'\n",
"mapping['Streptococcus'] = 'pneumonia'\n",
"mapping['Klebsiella'] = 'pneumonia'\n",
"mapping['Chlamydophila'] = 'pneumonia'\n",
"mapping['Legionella'] = 'pneumonia'\n",
"mapping['Normal'] = 'normal'\n",
"mapping['Lung Opacity'] = 'pneumonia'\n",
"mapping['1'] = 'pneumonia'\n",
"\n",
"# train/test split\n",
"split = 0.1\n",
"\n",
"# to avoid duplicates\n",
"patient_imgpath = {}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# adapted from https://github.com/mlmed/torchxrayvision/blob/master/torchxrayvision/datasets.py#L814\n",
"cohen_csv = pd.read_csv(cohen_csvpath, nrows=None)\n",
"#idx_pa = csv[\"view\"] == \"PA\" # Keep only the PA view\n",
"views = [\"PA\", \"AP\", \"AP Supine\", \"AP semi erect\", \"AP erect\"]\n",
"cohen_idx_keep = cohen_csv.view.isin(views)\n",
"cohen_csv = cohen_csv[cohen_idx_keep]\n",
"\n",
"fig1_csv = pd.read_csv(fig1_csvpath, encoding='ISO-8859-1', nrows=None)\n",
"#fig1_idx_keep = fig1_csv.view.isin(views)\n",
"#fig1_csv = fig1_csv[fig1_idx_keep]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get non-COVID19 viral, bacteria, and COVID-19 infections from covid-chestxray-dataset\n",
"# stored as patient id, image filename and label\n",
"filename_label = {'normal': [], 'pneumonia': [], 'COVID-19': []}\n",
"count = {'normal': 0, 'pneumonia': 0, 'COVID-19': 0}\n",
"for index, row in cohen_csv.iterrows():\n",
" f = row['finding'].split(',')[0] # take the first finding, for the case of COVID-19, ARDS\n",
" if f in mapping: # \n",
" count[mapping[f]] += 1\n",
" entry = [str(row['patientid']), row['filename'], mapping[f], row['view']]\n",
" filename_label[mapping[f]].append(entry)\n",
" \n",
"for index, row in fig1_csv.iterrows():\n",
" if not str(row['finding']) == 'nan':\n",
" f = row['finding'].split(',')[0] # take the first finding\n",
" if f in mapping: # \n",
" count[mapping[f]] += 1\n",
" if os.path.exists(os.path.join(fig1_imgpath, row['patientid'] + '.jpg')):\n",
" entry = [row['patientid'], row['patientid'] + '.jpg', mapping[f]]\n",
" elif os.path.exists(os.path.join(fig1_imgpath, row['patientid'] + '.png')):\n",
" entry = [row['patientid'], row['patientid'] + '.png', mapping[f]]\n",
" filename_label[mapping[f]].append(entry)\n",
"\n",
"print('Data distribution from covid-chestxray-dataset:')\n",
"print(count)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# add covid-chestxray-dataset into COVIDx dataset\n",
"# since covid-chestxray-dataset doesn't have test dataset\n",
"# split into train/test by patientid\n",
"# for COVIDx:\n",
"# patient 8 is used as non-COVID19 viral test\n",
"# patient 31 is used as bacterial test\n",
"# patients 19, 20, 36, 42, 86 are used as COVID-19 viral test\n",
"\n",
"for key in filename_label.keys():\n",
" arr = np.array(filename_label[key])\n",
" if arr.size == 0:\n",
" continue\n",
" # split by patients\n",
" # num_diff_patients = len(np.unique(arr[:,0]))\n",
" # num_test = max(1, round(split*num_diff_patients))\n",
" # select num_test number of random patients\n",
" if key == 'pneumonia':\n",
" test_patients = ['8', '31']\n",
" elif key == 'COVID-19':\n",
" test_patients = ['19', '20', '36', '42', '86', \n",
" '94', '97', '117', '132', \n",
" '138', '144', '150', '163', '169'] # random.sample(list(arr[:,0]), num_test)\n",
" else: \n",
" test_patients = []\n",
" print('Key: ', key)\n",
" print('Test patients: ', test_patients)\n",
" # go through all the patients\n",
" for patient in arr:\n",
" if patient[0] not in patient_imgpath:\n",
" patient_imgpath[patient[0]] = [patient[1]]\n",
" else:\n",
" if patient[1] not in patient_imgpath[patient[0]]:\n",
" patient_imgpath[patient[0]].append(patient[1])\n",
" else:\n",
" continue # skip since image has already been written\n",
" if patient[0] in test_patients:\n",
" copyfile(os.path.join(cohen_imgpath, patient[1]), os.path.join(savepath, 'test', patient[1]))\n",
" test.append(patient)\n",
" test_count[patient[2]] += 1\n",
" else:\n",
" if 'COVID' in patient[0]:\n",
" copyfile(os.path.join(fig1_imgpath, patient[1]), os.path.join(savepath, 'train', patient[1]))\n",
" else:\n",
" copyfile(os.path.join(cohen_imgpath, patient[1]), os.path.join(savepath, 'train', patient[1]))\n",
" train.append(patient)\n",
" train_count[patient[2]] += 1\n",
"\n",
"print('test count: ', test_count)\n",
"print('train count: ', train_count)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# add normal and rest of pneumonia cases from https://www.kaggle.com/c/rsna-pneumonia-detection-challenge\n",
"csv_normal = pd.read_csv(os.path.join(rsna_datapath, rsna_csvname), nrows=None)\n",
"csv_pneu = pd.read_csv(os.path.join(rsna_datapath, rsna_csvname2), nrows=None)\n",
"patients = {'normal': [], 'pneumonia': []}\n",
"\n",
"for index, row in csv_normal.iterrows():\n",
" if row['class'] == 'Normal':\n",
" patients['normal'].append(row['patientId'])\n",
"\n",
"for index, row in csv_pneu.iterrows():\n",
" if int(row['Target']) == 1:\n",
" patients['pneumonia'].append(row['patientId'])\n",
"\n",
"for key in patients.keys():\n",
" arr = np.array(patients[key])\n",
" if arr.size == 0:\n",
" continue\n",
" # split by patients \n",
" # num_diff_patients = len(np.unique(arr))\n",
" # num_test = max(1, round(split*num_diff_patients))\n",
" test_patients = np.load('rsna_test_patients_{}.npy'.format(key)) # random.sample(list(arr), num_test), download the .npy files from the repo.\n",
" # np.save('rsna_test_patients_{}.npy'.format(key), np.array(test_patients))\n",
" for patient in arr:\n",
" if patient not in patient_imgpath:\n",
" patient_imgpath[patient] = [patient]\n",
" else:\n",
" continue # skip since image has already been written\n",
" \n",
" ds = dicom.dcmread(os.path.join(rsna_datapath, rsna_imgpath, patient + '.dcm'))\n",
" pixel_array_numpy = ds.pixel_array\n",
" imgname = patient + '.png'\n",
" if patient in test_patients:\n",
" cv2.imwrite(os.path.join(savepath, 'test', imgname), pixel_array_numpy)\n",
" test.append([patient, imgname, key])\n",
" test_count[key] += 1\n",
" else:\n",
" cv2.imwrite(os.path.join(savepath, 'train', imgname), pixel_array_numpy)\n",
" train.append([patient, imgname, key])\n",
" train_count[key] += 1\n",
"\n",
"print('test count: ', test_count)\n",
"print('train count: ', train_count)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# final stats\n",
"print('Final stats')\n",
"print('Train count: ', train_count)\n",
"print('Test count: ', test_count)\n",
"print('Total length of train: ', len(train))\n",
"print('Total length of test: ', len(test))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# export to train and test csv\n",
"# format as patientid, filename, label, separated by a space\n",
"train_file = open(\"train_split_v3.txt\",\"a\") \n",
"for sample in train:\n",
" if len(sample) == 4:\n",
" info = str(sample[0]) + ' ' + sample[1] + ' ' + sample[2] + ' ' + sample[3] + '\\n'\n",
" else:\n",
" info = str(sample[0]) + ' ' + sample[1] + ' ' + sample[2] + '\\n'\n",
" train_file.write(info)\n",
"\n",
"train_file.close()\n",
"\n",
"test_file = open(\"test_split_v3.txt\", \"a\")\n",
"for sample in test:\n",
" if len(sample) == 4:\n",
" info = str(sample[0]) + ' ' + sample[1] + ' ' + sample[2] + ' ' + sample[3] + '\\n'\n",
" else:\n",
" info = str(sample[0]) + ' ' + sample[1] + ' ' + sample[2] + '\\n'\n",
" test_file.write(info)\n",
"\n",
"test_file.close()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python (covid)",
"language": "python",
"name": "covid"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.10"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
......@@ -98,6 +98,8 @@ class BalanceDataGenerator(keras.utils.Sequence):
folder = 'test'
x = cv2.imread(os.path.join(self.datadir, folder, sample[1]))
h, w, c = x.shape
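# crop away the top sixth of the image (likely removing burned-in text at the top of the radiograph) before resizing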
x = x[int(h/6):, :]
x = cv2.resize(x, self.input_shape)
if self.is_training and hasattr(self, 'augmentation'):
......
......@@ -15,6 +15,8 @@ def eval(sess, graph, testfile, testfolder):
for i in range(len(testfile)):
line = testfile[i].split()
x = cv2.imread(os.path.join('data', testfolder, line[1]))
h, w, c = x.shape
x = x[int(h/6):, :]
x = cv2.resize(x, (224, 224))
x = x.astype('float32') / 255.0
y_test.append(mapping[line[2]])
......
......@@ -4,9 +4,9 @@ import os, argparse
import cv2
parser = argparse.ArgumentParser(description='COVID-Net Inference')
parser.add_argument('--weightspath', default='output', type=str, help='Path to output folder')
parser.add_argument('--weightspath', default='models/COVIDNet-CXR-Large', type=str, help='Path to model files')
parser.add_argument('--metaname', default='model.meta', type=str, help='Name of ckpt meta file')
parser.add_argument('--ckptname', default='model', type=str, help='Name of model ckpts')
parser.add_argument('--ckptname', default='model-8485', type=str, help='Name of model ckpts')
parser.add_argument('--imagepath', default='assets/ex-covid.jpeg', type=str, help='Full path to image to be inferenced')
args = parser.parse_args()
......@@ -25,6 +25,8 @@ image_tensor = graph.get_tensor_by_name("input_1:0")
pred_tensor = graph.get_tensor_by_name("dense_3/Softmax:0")
x = cv2.imread(args.imagepath)
h, w, c = x.shape
x = x[int(h/6):, :]
x = cv2.resize(x, (224, 224))
x = x.astype('float32') / 255.0
pred = sess.run(pred_tensor, feed_dict={image_tensor: np.expand_dims(x, axis=0)})
......
47c78742-4998-4878-aec4-37b11b1354ac 47c78742-4998-4878-aec4-37b11b1354ac.png normal
8989e25c-a698-48fc-b428-fff56931fc8f 8989e25c-a698-48fc-b428-fff56931fc8f.png normal
7fb3786c-5045-4a90-981d-c55b53d4d5d3 7fb3786c-5045-4a90-981d-c55b53d4d5d3.png normal
766b8aea-3b43-4a34-b675-09f373ca066b 766b8aea-3b43-4a34-b675-09f373ca066b.png normal
f6236cb5-cc36-4ec4-895c-d11ce043341d f6236cb5-cc36-4ec4-895c-d11ce043341d.png normal
153b7c2b-4909-4dca-8579-9523582bc4fe 153b7c2b-4909-4dca-8579-9523582bc4fe.png normal
1afd6582-da9d-4e5a-a81f-62a188fe9366 1afd6582-da9d-4e5a-a81f-62a188fe9366.png normal
85bb48fe-e6d6-47de-bb71-d1636aced7dc 85bb48fe-e6d6-47de-bb71-d1636aced7dc.png normal
eaeb935a-7294-4dd3-8bf5-73ba781d28af eaeb935a-7294-4dd3-8bf5-73ba781d28af.png normal
5f137fa7-6539-499e-b0d5-0e481221bf5a 5f137fa7-6539-499e-b0d5-0e481221bf5a.png normal
b5234584-1487-492c-8742-444b9ca41c3d b5234584-1487-492c-8742-444b9ca41c3d.png normal
71d8d6ec-253b-4272-a467-311828b2a35b 71d8d6ec-253b-4272-a467-311828b2a35b.png normal
0103fadb-1663-40a6-8a9e-09d626cd2091 0103fadb-1663-40a6-8a9e-09d626cd2091.png normal
e4cd65ae-65de-44fc-a6b2-ebbc46d2e8d8 e4cd65ae-65de-44fc-a6b2-ebbc46d2e8d8.png normal
2f8fbfdc-56db-4eca-bf64-3c9a0637e28c 2f8fbfdc-56db-4eca-bf64-3c9a0637e28c.png normal
bc46651a-1314-44af-a834-1eb8a36e589e bc46651a-1314-44af-a834-1eb8a36e589e.png normal
71d25920-4060-4a31-b7aa-cfbf4721a5c5 71d25920-4060-4a31-b7aa-cfbf4721a5c5.png normal
ca3c90e4-f7fe-4f6e-b20b-81473deab4f0 ca3c90e4-f7fe-4f6e-b20b-81473deab4f0.png normal
dab2f334-331c-42c7-af09-a997092464b0 dab2f334-331c-42c7-af09-a997092464b0.png normal
436dce2a-06c3-4281-bb8e-840497a49381 436dce2a-06c3-4281-bb8e-840497a49381.png normal
af6ef3d9-81c8-434e-bc5a-dbf89bc418aa af6ef3d9-81c8-434e-bc5a-dbf89bc418aa.png normal
a7aef71b-0fc8-4837-be79-4ced56e03439 a7aef71b-0fc8-4837-be79-4ced56e03439.png normal
dfa57c1f-01fb-4417-a1b5-0641a9b4bb84 dfa57c1f-01fb-4417-a1b5-0641a9b4bb84.png normal
5eb932e2-3455-40fe-93db-ae44f897d9e0 5eb932e2-3455-40fe-93db-ae44f897d9e0.png normal
e8485b0f-293d-4ca8-8b71-92db61c8ee3e e8485b0f-293d-4ca8-8b71-92db61c8ee3e.png normal
35cda03a-0898-4f8b-92d3-6f263aed23ff 35cda03a-0898-4f8b-92d3-6f263aed23ff.png normal
d82e5841-2f43-4eab-ac17-e98a2d90c51b d82e5841-2f43-4eab-ac17-e98a2d90c51b.png normal
af1d168c-4850-41e1-85b8-ff8fc645e671 af1d168c-4850-41e1-85b8-ff8fc645e671.png normal
585f5059-8938-4885-accb-d7d0b1097c60 585f5059-8938-4885-accb-d7d0b1097c60.png normal
5d4c7318-4739-4470-89f3-70bbbca95f10 5d4c7318-4739-4470-89f3-70bbbca95f10.png normal
d3ad2915-af30-426c-ad2d-1634df8c1b5f d3ad2915-af30-426c-ad2d-1634df8c1b5f.png normal
dee054ff-0e1a-4167-b814-cbf339cf689c dee054ff-0e1a-4167-b814-cbf339cf689c.png normal
bb068d57-86a8-4347-bee0-e29d59ddef6b bb068d57-86a8-4347-bee0-e29d59ddef6b.png normal
354c3756-43ed-4921-adf8-60be49a8b7e8 354c3756-43ed-4921-adf8-60be49a8b7e8.png normal
b8dd7d32-b177-4d1e-982f-6f0a743828fa b8dd7d32-b177-4d1e-982f-6f0a743828fa.png normal
a17f7e03-42a6-41d3-82ad-d8a656826936 a17f7e03-42a6-41d3-82ad-d8a656826936.png normal
c3c78e4e-1a31-4a92-be4b-0fc3c7f992a4 c3c78e4e-1a31-4a92-be4b-0fc3c7f992a4.png normal
f1c8caa2-a6dc-40c0-ab85-4769688f1eec f1c8caa2-a6dc-40c0-ab85-4769688f1eec.png normal
d535a3c8-c4a4-4856-b5cd-17f6332eac8b d535a3c8-c4a4-4856-b5cd-17f6332eac8b.png normal
2ed06be4-9f0e-4256-b8bf-a4b396cc6364 2ed06be4-9f0e-4256-b8bf-a4b396cc6364.png normal
168f4fcb-a87d-49fc-b167-4ed42fd1ec43 168f4fcb-a87d-49fc-b167-4ed42fd1ec43.png normal
a8e3ea7d-0b63-45cf-85d8-1078d3e2449d a8e3ea7d-0b63-45cf-85d8-1078d3e2449d.png normal
f31d1fb1-604b-4fec-b320-f07ce2694008 f31d1fb1-604b-4fec-b320-f07ce2694008.png normal
069cfd47-0169-43e7-89a1-0be0fa24105b 069cfd47-0169-43e7-89a1-0be0fa24105b.png normal
85d84453-ab2d-4666-98e6-5df3cbb81f5b 85d84453-ab2d-4666-98e6-5df3cbb81f5b.png normal
8db71746-e837-43c2-bdf1-c44d1ab207e3 8db71746-e837-43c2-bdf1-c44d1ab207e3.png normal
275f9fff-2439-4f3f-9c48-7c21f348701a 275f9fff-2439-4f3f-9c48-7c21f348701a.png normal
1d1968e2-88a5-4d72-9237-caf4cbad9423 1d1968e2-88a5-4d72-9237-caf4cbad9423.png normal
07aeb82e-773b-4498-95c0-fabdf4985bb2 07aeb82e-773b-4498-95c0-fabdf4985bb2.png normal
652d6bb3-56a0-4a89-9631-684ce27cfc4c 652d6bb3-56a0-4a89-9631-684ce27cfc4c.png normal
e155c13d-7ac7-4765-bb86-899349978413 e155c13d-7ac7-4765-bb86-899349978413.png normal
98bfaf7a-d80b-491e-a7e0-c5c7316bebe9 98bfaf7a-d80b-491e-a7e0-c5c7316bebe9.png normal
76b180ab-242f-45d3-bce1-68df00c5ef45 76b180ab-242f-45d3-bce1-68df00c5ef45.png normal
372b2fea-2013-4600-a06f-765c3f054a82 372b2fea-2013-4600-a06f-765c3f054a82.png normal
a52aa522-a1b3-4f15-8609-f44bf96da2e3 a52aa522-a1b3-4f15-8609-f44bf96da2e3.png normal
6556ce72-1a60-40aa-aaf7-b50dbf07fae7 6556ce72-1a60-40aa-aaf7-b50dbf07fae7.png normal
633d9182-1809-41b7-a59b-1e60ac91f0c9 633d9182-1809-41b7-a59b-1e60ac91f0c9.png normal
fd7bad9a-1bff-49ec-9c6c-c9aae9e65726 fd7bad9a-1bff-49ec-9c6c-c9aae9e65726.png normal
99432aa3-8d61-4ff2-a79a-f0a0218d6fa2 99432aa3-8d61-4ff2-a79a-f0a0218d6fa2.png normal
34bf2fcd-131a-428c-9a21-cd2fa9041f9b 34bf2fcd-131a-428c-9a21-cd2fa9041f9b.png normal
b5007662-ff9c-49ad-9093-85ed7dc44bf4 b5007662-ff9c-49ad-9093-85ed7dc44bf4.png normal
06b2f933-3ea2-4477-ac27-18f732d1f4e1 06b2f933-3ea2-4477-ac27-18f732d1f4e1.png normal
c2b24ebd-2c40-48c3-ba39-177224dd7db0 c2b24ebd-2c40-48c3-ba39-177224dd7db0.png normal
63192a6c-02ba-48a5-932a-bb82aeacb1bc 63192a6c-02ba-48a5-932a-bb82aeacb1bc.png normal
73dca3d1-5c58-4c72-80a9-201deca7ffec 73dca3d1-5c58-4c72-80a9-201deca7ffec.png normal
c485a9bd-6e18-4328-8c5c-cf80f71aa35d c485a9bd-6e18-4328-8c5c-cf80f71aa35d.png normal
b540ba89-72f7-40f5-a916-376546c20014 b540ba89-72f7-40f5-a916-376546c20014.png normal
33e4e43b-054b-4537-9cd6-0fe574f7d337 33e4e43b-054b-4537-9cd6-0fe574f7d337.png normal
45dd0d26-0740-4c32-bd01-34246a41f3e3 45dd0d26-0740-4c32-bd01-34246a41f3e3.png normal
ad446933-fb8e-4739-bb40-2063e796ffd8 ad446933-fb8e-4739-bb40-2063e796ffd8.png normal
5b12801f-aaf7-442e-b994-ec8f15ce78e5 5b12801f-aaf7-442e-b994-ec8f15ce78e5.png normal
3fe1550e-8bcb-4732-a343-725acce70531 3fe1550e-8bcb-4732-a343-725acce70531.png normal
0bb24183-8b59-48f1-8bbf-4d889976fc82 0bb24183-8b59-48f1-8bbf-4d889976fc82.png normal
02e4191e-fb03-4581-914c-f0438a17e53e 02e4191e-fb03-4581-914c-f0438a17e53e.png normal
a8dfd068-69e9-4486-bb22-2a04f2f2e1a5 a8dfd068-69e9-4486-bb22-2a04f2f2e1a5.png normal
63f71157-7db6-476f-b320-ba2fcbe2543f 63f71157-7db6-476f-b320-ba2fcbe2543f.png normal
1cdba4ee-0bb8-421d-9a24-7febeb399729 1cdba4ee-0bb8-421d-9a24-7febeb399729.png normal
e1d23cbe-213d-48d6-a8c2-672c4e68285d e1d23cbe-213d-48d6-a8c2-672c4e68285d.png normal
4b1cab8a-c9bd-40e6-bc86-23c6be98a099 4b1cab8a-c9bd-40e6-bc86-23c6be98a099.png normal
ec0492da-e6be-49cf-8b18-dc143180f1a2 ec0492da-e6be-49cf-8b18-dc143180f1a2.png normal
89dd8f63-8320-48f3-b142-d903f40d5c8c 89dd8f63-8320-48f3-b142-d903f40d5c8c.png normal
b76728e6-f44b-40bf-8688-e703d10d2039 b76728e6-f44b-40bf-8688-e703d10d2039.png normal
d9884def-b31a-4861-9eea-27f97bffeba2 d9884def-b31a-4861-9eea-27f97bffeba2.png normal
ffba6230-71cf-4287-a0d1-887f5d16e95d ffba6230-71cf-4287-a0d1-887f5d16e95d.png normal
f3b015ab-e337-4e7f-971d-eb7cc3dd4e92 f3b015ab-e337-4e7f-971d-eb7cc3dd4e92.png normal
bca4aa4d-0cc9-4cc0-b06d-5b675194bf62 bca4aa4d-0cc9-4cc0-b06d-5b675194bf62.png normal
9e2ddac8-9a4c-448b-98c8-a840a548d0f7 9e2ddac8-9a4c-448b-98c8-a840a548d0f7.png normal
fc958b63-5da8-448d-8962-3c5167dc9410 fc958b63-5da8-448d-8962-3c5167dc9410.png normal
cb7d021b-b273-436a-bd7a-e68c11ed3f6b cb7d021b-b273-436a-bd7a-e68c11ed3f6b.png normal
ff332704-48e0-445b-9188-b2a696d1f0d7 ff332704-48e0-445b-9188-b2a696d1f0d7.png normal
3e1b619a-cdd9-495a-bcbf-a9d62b418991 3e1b619a-cdd9-495a-bcbf-a9d62b418991.png normal
0977f16c-c343-42c9-95ed-d7ca996feb16 0977f16c-c343-42c9-95ed-d7ca996feb16.png normal
f8a11cd6-e541-4765-a41b-3d70b6f3481e f8a11cd6-e541-4765-a41b-3d70b6f3481e.png normal
960131aa-4ce4-4b26-aa12-5f73d8d81453 960131aa-4ce4-4b26-aa12-5f73d8d81453.png normal
3a5327d8-8830-4ae2-bd6b-293f5aa42d4b 3a5327d8-8830-4ae2-bd6b-293f5aa42d4b.png normal