Commit 943790de authored by Yipeng Hu's avatar Yipeng Hu

ref #00 clean-up wrong origin merge

parent c722f7dc
# Machine-Learning-Journal-Club
Discuss at weiss-ucl.slack.com
Machine Learning in Medical Imaging (MPHY0041)
### -------------------------------
#### 28th January 2020, 13:00 - 14:00
#### Topic: Neural Network Architecture
#### G10 Seminar Room 4, Charles Bell House
#### Paper(s):
He, K., Zhang, X., Ren, S. and Sun, J., 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
[paper link][paper_20200128_01]
[paper_20200128_01]: https://arxiv.org/pdf/1512.03385
#### Presenter(s):
Dimitris Psychogyios
### -------------------------------
#### 18th February 2020, 13:00 - 14:00
#### Topic: TBC
#### G03 Seminar Room 1, Charles Bell House
#### Paper(s):
To be confirmed
#### Presenter(s):
Francisco Vasconcelos
### -------------------------------
#### Future topics:
#### Attention mechanism
#### Region proposal
#### Semi-supervised learning
#### Adversarial learning
#### Ensemble learning
#### Meta learning
#### Active Learning
#### Bayesian
#### Multiple Instance Learning
#### Multi-task
#### Deeplab
### -------------------------------
# Past:
### -------------------------------
#### 14th January 2020, 13:00 - 14:00
#### Topic: Video Motion Magnification
#### G11 Seminar Room 4, Charles Bell House
#### Paper(s):
Oh, T.H., Jaroensri, R., Kim, C., Elgharib, M., Durand, F.E., Freeman, W.T. and Matusik, W., 2018. Learning-based video motion magnification. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 633-648).
[paper link][paper_20200114_01]
[paper_20200114_01]: https://arxiv.org/pdf/1804.02684.pdf
#### Presenter(s):
Mirek Janatka
### -------------------------------
#### 10th December 2019, 13:00 - 14:00
#### Topic: Recurrent Neural Networks
#### G03 Seminar Room 1, Charles Bell House
#### Paper(s):
Jin, Y., Dou, Q., Chen, H., Yu, L., Qin, J., Fu, C.W. and Heng, P.A., 2018. SV-RCNet: workflow recognition from surgical videos using recurrent convolutional network. IEEE Transactions on Medical Imaging, 37(5), pp.1114-1126.
[paper link][paper_20191210_01]
[paper_20191210_01]: https://ieeexplore.ieee.org/abstract/document/8240734
#### Presenter(s):
Sophia Bano
### -------------------------------
#### 3rd December 2019, 13:00 - 14:00
#### Topic: Depth Estimation
#### G03 Seminar Room 1, Charles Bell House
#### Paper(s):
Lasinger, K., Ranftl, R., Schindler, K. and Koltun, V., 2019. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. arXiv preprint arXiv:1907.01341.
[paper link][paper_20191203_01]
[paper_20191203_01]: https://arxiv.org/pdf/1907.01341.pdf
#### Presenter(s):
Eddie Edwards
### -------------------------------
#### 26th November 2019, 13:00 - 14:00
#### Topic: Spatial Transformation in Deep Neural Networks
#### G03 Seminar Room 1, Charles Bell House
#### Paper(s):
Jaderberg, M., Simonyan, K., & Zisserman, A. (2015). Spatial transformer networks. In Advances in neural information processing systems (pp. 2017-2025).
[paper link][paper_20191126_01]
#### Presenter(s):
Mark Pinnock
[paper_20191126_01]: https://papers.nips.cc/paper/5854-spatial-transformer-networks.pdf
### -------------------------------
#### 19th November 2019, 13:00 - 14:00
#### Topic: Reinforcement Learning
#### G03 Seminar Room 1, Charles Bell House
#### Paper(s):
Kendall, A., Hawke, J., Janz, D., Mazur, P., Reda, D., Allen, J. M., ... & Shah, A. (2019, May). Learning to drive in a day. In 2019 International Conference on Robotics and Automation (ICRA) (pp. 8248-8254). IEEE.
[paper link][paper_20191119_01]
#### Presenter(s):
Bongjin Koo
[paper_20191119_01]: https://arxiv.org/pdf/1807.00412.pdf
### -------------------------------
### --- [cancelled] ---
#### 12th November 2019, 13:00 - 14:00
#### Topic: Reinforcement Learning
#### G03 Seminar Room 1, Charles Bell House
#### Paper(s):
To be determined
#### Presenter(s):
Bongjin Koo
### -------------------------------
#### 22nd October 2019, 13:00 - 14:00
#### Topic: Model Interpretability
#### G03 Seminar Room 1, Charles Bell House
#### Paper(s):
Clough, J.R., Oksuz, I., Puyol-Anton, E., Ruijsink, B., King, A.P. and Schnabel, J.A., 2019. Global and Local Interpretability for Cardiac MRI Classification. arXiv preprint arXiv:1906.06188. MICCAI 2019. [paper link][paper_22102019_01]
#### Presenter(s):
Simone Foti
[paper_22102019_01]: https://arxiv.org/pdf/1906.06188.pdf
### -------------------------------
#### 8th October 2019, 13:00 - 14:00
#### Topic: Domain Adaptation
#### G03 Seminar Room 1, Charles Bell House
#### Paper(s):
Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M. and Lempitsky, V., 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1), pp.2096-2030. [paper link][paper_08102019_00]
Dou, Q., Ouyang, C., Chen, C., Chen, H. and Heng, P.A., 2018. Unsupervised cross-modality domain adaptation of convnets for biomedical image segmentations with adversarial loss. arXiv preprint arXiv:1804.10916. [paper link][paper_08102019_01]
#### Presenter(s):
Yipeng Hu
[paper_08102019_00]: http://jmlr.org/papers/volume17/15-239/15-239.pdf
[paper_08102019_01]: https://www.ijcai.org/proceedings/2018/0096.pdf
% script_prepData
if ispc
    homeFolder = getenv('USERPROFILE');
elseif isunix
    homeFolder = getenv('HOME');
end
normFolder = fullfile(homeFolder, 'Scratch/data/protocol/normalised');
mkdir(normFolder);
dataFolder = fullfile(homeFolder, 'Scratch/data/protocol/SPE_data_classes');
ClassNames = {'1_skull'; '2_abdomen'; '3_heart'; '4_other'};
ClassFolders = cellfun(@(x)fullfile(dataFolder,x),ClassNames,'UniformOutput',false);
%% go through all raw data and obtain the frame_info
case_ids = {};
frame_info = [];  % struct array, populated below
frame_counters = zeros(length(ClassFolders),1);
idx_frame_1 = 0;
for idx_class_1 = 1:length(ClassFolders)
    frame_names = dir(ClassFolders{idx_class_1});
    for j = 3:length(frame_names)  % skip '.' and '..'
        fname = frame_names(j).name;
        % skip problematic files that cannot be read
        try
            % debug: fprintf('reading No.%d - [%s]\n',j,fname)
            img = imread(fullfile(frame_names(j).folder,fname)); % figure, imshow(img,[])
        catch
            disp(fullfile(frame_names(j).folder,fname))
            continue
        end
        % deal with the different date formats in the filename
        date_del = strfind(fname,'-');
        if length(date_del)>=6 % case where the date is repeated
            start0 = date_del(6)+1;
        elseif length(date_del)>=3
            start0 = date_del(3)+1;
        else
            start0 = 1;
        end
        ext0 = regexpi(fname,'\.png$');
        newstr = strrep(fname(start0:ext0-1),'fr_','');
        % additional sanity check on the filename format
        udls = strfind(newstr,'_');
        if length(udls) ~= 2
            warning('Incorrect filename format! %s',fname);
        end
        id = newstr(1:udls(2)-1);
        fr = str2double(newstr(udls(2)+1:end));
        [~, idx_case_1] = ismember(id, case_ids);
        if idx_case_1==0 % a new case - add it to the list
            idx_case_1 = length(case_ids)+1;
            case_ids{idx_case_1} = id;
        end
        idx_frame_1 = idx_frame_1+1;
        frame_info(idx_frame_1).filename = fname;
        frame_info(idx_frame_1).case_name = id;
        frame_info(idx_frame_1).case_idx = idx_case_1 - 1; % 0-based
        frame_info(idx_frame_1).class_name = ClassNames{idx_class_1};
        frame_info(idx_frame_1).class_idx = idx_class_1 - 1; % 0-based
    end
end
save(fullfile(normFolder,'frame_info'),'frame_info','dataFolder');
%% now write into files
% specify the folders
load(fullfile(normFolder,'frame_info')); % specify the folders
roi_crop = [47,230,33,288]; % [xmin,xmax,ymin,ymax]
frame_size = [roi_crop(2)-roi_crop(1)+1, roi_crop(4)-roi_crop(3)+1];
indices_class = [frame_info(:).class_idx];
num_classes = length(unique(indices_class));
indices_subject = [frame_info(:).case_idx];
num_subjects = length(unique(indices_subject));
% by subject
MAX_num_frames = 50;
RESIZE_scale = 2;
frame_size = round(frame_size/RESIZE_scale);
h5fn_subjects = fullfile(normFolder,'ultrasound_50frames.h5'); delete(h5fn_subjects);
% write global information
GroupName = '/frame_size';
h5create(h5fn_subjects,GroupName,size(frame_size),'DataType','uint32');
h5write(h5fn_subjects,GroupName,uint32(frame_size));
GroupName = '/num_classes';
h5create(h5fn_subjects,GroupName,[1,1],'DataType','uint32');
h5write(h5fn_subjects,GroupName,uint32(num_classes));
GroupName = '/num_subjects';
h5create(h5fn_subjects,GroupName,[1,1],'DataType','uint32');
h5write(h5fn_subjects,GroupName,uint32(num_subjects));
for idx_subject = 0:num_subjects-1 % 0-based indexing
    indices_frame_1_subject = find(indices_subject==idx_subject);
    num_frames_subject = length(indices_frame_1_subject);
    if num_frames_subject>MAX_num_frames
        indices_frame_1_subject = randsample(indices_frame_1_subject,MAX_num_frames);
        num_frames_subject = MAX_num_frames;
    end
    for idx_frame_subject = 0:num_frames_subject-1
        idx_frame = indices_frame_1_subject(idx_frame_subject+1);
        filename = fullfile(dataFolder, frame_info(idx_frame).class_name, frame_info(idx_frame).filename);
        img = imread(filename);
        img = imresize(img(roi_crop(1):roi_crop(2),roi_crop(3):roi_crop(4)),frame_size);
        GroupName = sprintf('/subject%06d_frame%08d',idx_subject,idx_frame_subject);
        h5create(h5fn_subjects,GroupName,size(img),'DataType','uint8');
        h5write(h5fn_subjects,GroupName,img);
        GroupName = sprintf('/subject%06d_label%08d',idx_subject,idx_frame_subject);
        h5create(h5fn_subjects,GroupName,[1,1],'DataType','uint32');
        h5write(h5fn_subjects,GroupName,uint32(indices_class(idx_frame)));
    end
    GroupName = sprintf('/subject%06d_num_frames',idx_subject);
    h5create(h5fn_subjects,GroupName,[1,1],'DataType','uint32');
    h5write(h5fn_subjects,GroupName,uint32(num_frames_subject));
end
%% obsolete
% % by subject
% h5fn_subjects = fullfile(normFolder,'protocol_sweep_class_subjects.h5'); delete(h5fn_subjects);
% num_frames_per_subject = zeros(1,num_subjects,'uint32');
% for idx_subject = (1:num_subjects)-1 % 0-based indexing
% frame_subject = 0;
% indices_frame_1_subject = find(indices_subject==idx_subject);
% num_frames_per_subject(idx_subject+1) = length(indices_frame_1_subject);
% for idx_frame_1 = indices_frame_1_subject
% filename = fullfile(dataFolder,frame_info(idx_frame_1).class_name,frame_info(idx_frame_1).filename);
% img = imread(filename);
% img = img(roi_crop(1):roi_crop(2),roi_crop(3):roi_crop(4));
% GroupName = sprintf('/subject%06d_frame%08d',idx_subject,frame_subject);
% frame_subject = frame_subject+1;
% h5create(h5fn_subjects,GroupName,size(img),'DataType','uint8');
% h5write(h5fn_subjects,GroupName,img);
% end
% GroupName = sprintf('/subject%06d_class',idx_subject);
% h5create(h5fn_subjects,GroupName,size(indices_frame_1_subject),'DataType','uint32');
% h5write(h5fn_subjects,GroupName,uint32(indices_class(indices_frame_1_subject)));
% end
% % extra info
% GroupName = '/num_frames_per_subject';
% h5create(h5fn_subjects,GroupName,size(num_frames_per_subject),'DataType','uint32');
% h5write(h5fn_subjects,GroupName,uint32(num_frames_per_subject));
% GroupName = '/frame_size';
% h5create(h5fn_subjects,GroupName,size(frame_size),'DataType','uint32');
% h5write(h5fn_subjects,GroupName,uint32(frame_size));
% GroupName = '/num_classes';
% h5create(h5fn_subjects,GroupName,[1,1],'DataType','uint32');
% h5write(h5fn_subjects,GroupName,uint32(num_classes));
% GroupName = '/num_subjects';
% h5create(h5fn_subjects,GroupName,[1,1],'DataType','uint32');
% h5write(h5fn_subjects,GroupName,uint32(num_subjects));
% % by frames
% h5fn_frames = fullfile(normFolder,'protocol_sweep_class_frames.h5'); delete(h5fn_frames);
% for idx_frame_1 = 1:length(frame_info)
% %% now read in image
% filename = fullfile(dataFolder,frame_info(idx_frame_1).class_name,frame_info(idx_frame_1).filename);
% img = imread(filename);
% img = img(roi_crop(1):roi_crop(2),roi_crop(3):roi_crop(4));
% % figure, imshow(img,[])
% GroupName = sprintf('/frame%08d',idx_frame_1-1);
% h5create(h5fn_frames,GroupName,size(img),'DataType','uint8');
% h5write(h5fn_frames,GroupName,img);
% end
% GroupName = '/class';
% h5create(h5fn_frames,GroupName,size(indices_class),'DataType','uint32');
% h5write(h5fn_frames,GroupName,uint32(indices_class));
% GroupName = '/subject';
% h5create(h5fn_frames,GroupName,size(indices_subject),'DataType','uint32');
% h5write(h5fn_frames,GroupName,uint32(indices_subject));
% % extra info
% GroupName = '/frame_size';
% h5create(h5fn_frames,GroupName,size(frame_size),'DataType','uint32');
% h5write(h5fn_frames,GroupName,uint32(frame_size));
% GroupName = '/num_classes';
% h5create(h5fn_frames,GroupName,[1,1],'DataType','uint32');
% h5write(h5fn_frames,GroupName,uint32(num_classes));
% GroupName = '/num_subjects';
% h5create(h5fn_frames,GroupName,[1,1],'DataType','uint32');
% h5write(h5fn_frames,GroupName,uint32(num_subjects));
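The HDF5 layout written by `script_prepData` above can also be inspected from Python. A minimal sketch using `h5py` (an assumption; the training scripts below use `tf.keras.utils.HDF5Matrix` instead), built against a tiny synthetic file that follows the same group-naming patterns `/subject%06d_frame%08d`, `/subject%06d_label%08d` and `/subject%06d_num_frames`:

```python
import os
import tempfile

import h5py
import numpy as np

# build a tiny stand-in file with the same group naming (synthetic data,
# one subject with two frames; the 92x128 frame size is an assumption)
path = os.path.join(tempfile.mkdtemp(), 'ultrasound_50frames.h5')
with h5py.File(path, 'w') as f:
    f.create_dataset('/num_subjects', data=np.uint32([[1]]))
    f.create_dataset('/num_classes', data=np.uint32([[4]]))
    f.create_dataset('/frame_size', data=np.uint32([92, 128]))
    f.create_dataset('/subject000000_num_frames', data=np.uint32([[2]]))
    for i in range(2):
        f.create_dataset('/subject%06d_frame%08d' % (0, i),
                         data=np.zeros((92, 128), dtype=np.uint8))
        f.create_dataset('/subject%06d_label%08d' % (0, i),
                         data=np.uint32([[i]]))

# read it back, addressing groups the same way the training script does
with h5py.File(path, 'r') as f:
    num_subjects = int(np.array(f['/num_subjects']).flat[0])
    num_frames = int(np.array(f['/subject000000_num_frames']).flat[0])
    frame = np.array(f['/subject000000_frame00000000'])
print(num_subjects, num_frames, frame.shape)
```

Note that MATLAB's `h5write` stores arrays column-major, which is why the TensorFlow readers below transpose each frame after loading.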
import random
import tensorflow as tf
from matplotlib import pyplot as plt
nSbj = 6
nFrm = 8
filename = '../../../datasets/ultrasound_50frames.h5'
# randomly sample nSbj subjects
num_subjects = tf.keras.utils.HDF5Matrix(filename, '/num_subjects').data.value[0][0]
idx_subject = random.sample(range(num_subjects), nSbj)
plt.figure()
for iSbj in range(nSbj):
    dataset = '/subject%06d_num_frames' % (idx_subject[iSbj])
    num_frames = tf.keras.utils.HDF5Matrix(filename, dataset)[0][0]
    idx_frame = random.sample(range(num_frames), nFrm)
    for iFrm in range(nFrm):
        dataset = '/subject%06d_frame%08d' % (idx_subject[iSbj], idx_frame[iFrm])
        frame = tf.transpose(tf.keras.utils.HDF5Matrix(filename, dataset))
        dataset = '/subject%06d_label%08d' % (idx_subject[iSbj], idx_frame[iFrm])
        label = tf.keras.utils.HDF5Matrix(filename, dataset)[0][0]
        axs = plt.subplot(nSbj, nFrm, iSbj*nFrm+iFrm+1)
        axs.set_title('S{}, F{}, C{}'.format(idx_subject[iSbj], idx_frame[iFrm], label))
        axs.imshow(frame, cmap='gray')
        axs.axis('off')
plt.show()
import tensorflow as tf
import random
# import numpy as np
filename = '../../../datasets/ultrasound_50frames.h5'
frame_size = tf.keras.utils.HDF5Matrix(filename, '/frame_size').data.value
frame_size = [frame_size[0][0],frame_size[1][0]]
num_classes = tf.keras.utils.HDF5Matrix(filename, '/num_classes').data.value[0][0]
# placeholder for input image frames
features_input = tf.keras.Input(shape=frame_size+[1])
features = tf.keras.layers.Conv2D(32, 7, activation='relu')(features_input)
features = tf.keras.layers.MaxPool2D(3)(features)
features_block_1 = tf.keras.layers.Conv2D(64, 3, activation='relu')(features)
features = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same')(features_block_1)
features = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same')(features)
features_block_2 = features + features_block_1
features = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same')(features_block_2)
features = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same')(features)
features = features + features_block_2
features = tf.keras.layers.MaxPool2D(3)(features)
features_block_3 = tf.keras.layers.Conv2D(128, 3, activation='relu')(features)
features = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same')(features_block_3)
features = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same')(features)
features_block_4 = features + features_block_3
features = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same')(features_block_4)
features = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same')(features)
features_block_5 = features + features_block_4
features = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same')(features_block_5)
features = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same')(features)
features_block_6 = features + features_block_5
features = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same')(features_block_6)
features = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same')(features)
features = features + features_block_6
features = tf.keras.layers.Conv2D(128, 3, activation='relu')(features)
features = tf.keras.layers.GlobalAveragePooling2D()(features)
features = tf.keras.layers.Dense(units=256, activation='relu')(features)
features = tf.keras.layers.Dropout(0.5)(features)
logits_output = tf.keras.layers.Dense(units=num_classes, activation='softmax')(features)
# now the model
model = tf.keras.Model(inputs=features_input, outputs=logits_output)
model.summary()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss='sparse_categorical_crossentropy',
              metrics=['SparseCategoricalAccuracy'])
# now get the data using a generator
num_subjects = tf.keras.utils.HDF5Matrix(filename, '/num_subjects').data.value[0][0]
subject_indices = range(num_subjects)
num_frames_per_subject = 1
def data_generator():
    for iSbj in subject_indices:
        dataset = '/subject%06d_num_frames' % iSbj
        num_frames = tf.keras.utils.HDF5Matrix(filename, dataset)[0][0]
        idx_frame = random.sample(range(num_frames), num_frames_per_subject)[0]
        dataset = '/subject%06d_frame%08d' % (iSbj, idx_frame)
        frame = tf.transpose(tf.keras.utils.HDF5Matrix(filename, dataset)) / 255
        dataset = '/subject%06d_label%08d' % (iSbj, idx_frame)
        label = tf.keras.utils.HDF5Matrix(filename, dataset)[0][0]
        yield (tf.expand_dims(frame, axis=2), label)

dataset = tf.data.Dataset.from_generator(generator=data_generator,
                                         output_types=(tf.float32, tf.int32),
                                         output_shapes=(frame_size+[1], ()))
# training
dataset_batch = dataset.shuffle(buffer_size=1024).batch(num_subjects)
frame_train, label_train = next(iter(dataset_batch))
model.fit(frame_train, label_train, epochs=int(1e3), validation_split=0.2)
import tensorflow as tf
### build a data pipeline
# https://www.tensorflow.org/guide/data
# https://www.tensorflow.org/guide/data_performance
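Following the guides linked above, one way this stub could be fleshed out is the standard slice-shuffle-batch-prefetch pattern. This is only a sketch over synthetic arrays standing in for the ultrasound frames; the shapes, class count and batch size are placeholder assumptions, not the course's implementation:

```python
import numpy as np
import tensorflow as tf

# synthetic stand-ins for frames and labels (assumed 92x128 frames, 4 classes)
frames = np.random.rand(20, 92, 128, 1).astype(np.float32)
labels = np.random.randint(0, 4, size=20).astype(np.int32)

# from_tensor_slices -> shuffle -> batch -> prefetch, as in the tf.data guide
dataset = tf.data.Dataset.from_tensor_slices((frames, labels))
dataset = (dataset.shuffle(buffer_size=20)
                  .batch(4)
                  .prefetch(tf.data.experimental.AUTOTUNE))

# take one batch to confirm the pipeline yields batched tensors
for batch_frames, batch_labels in dataset.take(1):
    print(batch_frames.shape, batch_labels.shape)
```

The `prefetch` step overlaps data preparation with training, which is the main performance point of the second guide.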
# (Provided)
# *** Available as part of UCL MPHY0025 (Information Processing in Medical Imaging) Assessed Coursework 2018-19 ***
# *** This code is with an Apache 2.0 license, University College London ***
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('default')
def dispImage(img, int_lims=[], ax=None):
    """
    function to display a grey-scale image that is stored in 'standard
    orientation' with y-axis on the 2nd dimension and 0 at the bottom

    INPUTS:  img: image to be displayed
             int_lims: the intensity limits to use when displaying the
                 image, int_lims[0] = min intensity to display, int_lims[1]
                 = max intensity to display [default min and max intensity
                 of image]
             ax: if displaying an image on a subplot grid or on top of a
                 second image, optionally supply the axis on which to display
                 the image.
    OUTPUTS: ax: the axis object after plotting if an axis object is
                 supplied
    """
    # check if intensity limits have been provided, and if not set to min and
    # max of image
    if not int_lims:
        int_lims = [np.nanmin(img), np.nanmax(img)]
        # check if min and max are same (i.e. all values in img are equal)
        if int_lims[0] == int_lims[1]:
            int_lims[0] -= 1
            int_lims[1] += 1
    # take transpose of image to switch x and y dimensions and display with
    # first pixel having coordinates 0,0
    img = img.T
    if not ax:
        plt.imshow(img, cmap='gray', vmin=int_lims[0], vmax=int_lims[1],
                   origin='lower')
    else:
        ax.imshow(img, cmap='gray', vmin=int_lims[0], vmax=int_lims[1],
                  origin='lower')
    # set axis to be scaled equally (assumes isotropic pixel dimensions), tight
    # around the image
    plt.axis('image')
    plt.tight_layout()
    return ax
def dispBinaryImage(binImg, cmap='Greens_r', ax=None):
    """
    function to display a binary image that is stored in 'standard
    orientation' with y-axis on the 2nd dimension and 0 at the bottom

    INPUTS:  binImg: binary image to be displayed
             ax: if displaying an image on a subplot grid or on top of a
                 second image, optionally supply the axis on which to display
                 the image.
                 E.g.
                     fig = plt.figure()
                     ax = fig.gca()
                     ax = dispImage(ct_image, ax=ax)
                     ax = dispBinaryImage(label_image, ax=ax)
             cmap: color map of the binary image to be displayed
                 (see: https://matplotlib.org/examples/color/colormaps_reference.html)
    OUTPUTS: ax: the axis object after plotting if an axis object is
                 supplied
    """
    # take transpose of image to switch x and y dimensions and display with
    # first pixel having coordinates 0,0
    binImg = binImg.T
    # set the background pixels to NaNs so that imshow will display them as
    # transparent
    binImg = np.where(binImg == 0, np.nan, binImg)
    if not ax:
        plt.imshow(binImg, cmap=cmap, origin='lower')
    else:
        ax.imshow(binImg, cmap=cmap, origin='lower')
    # set axis to be scaled equally (assumes isotropic pixel dimensions), tight
    # around the image
    plt.axis('image')
    plt.tight_layout()
    return ax
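The transparency trick used above (background zeros replaced by NaN so that `imshow` leaves them blank) can be checked in isolation. A small sketch with a synthetic 3x3 mask:

```python
import numpy as np

# synthetic 3x3 binary mask: only the centre pixel is foreground
binImg = np.zeros((3, 3))
binImg[1, 1] = 1

# zero (background) pixels -> NaN; matplotlib's imshow renders NaN pixels
# as fully transparent, so only the foreground is drawn over the base image
masked = np.where(binImg == 0, np.nan, binImg)

print(int(np.isnan(masked).sum()))  # 8 of the 9 pixels are background
```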
def dispImageAndBinaryOverlays(img, bin_imgs=[], bin_cols=[], int_lims=[], ax=None):
    """
    function to display a grey-scale image with one or more binary images
    overlaid

    INPUTS:  img: image to be displayed
             bin_imgs: a list or np.array containing one or more binary images.
                 must have same dimensions as img
             bin_cols: a list or np.array containing the matplotlib colormaps
                 to use for each binary image E.g. 'Greens_r', 'Reds_r'
                 Must have one colormap for each binary image
             int_lims: the intensity limits to use when displaying the
                 image, int_lims[0] = min intensity to display, int_lims[1]
                 = max intensity to display [default min and max intensity
                 of image]
             ax: if displaying an image on a subplot grid or on top of a
                 second image, optionally supply the axis on which to display
                 the image.
    OUTPUTS: ax: the axis object after plotting if an axis object is
                 supplied
    """
    # check if intensity limits have been provided, and if not set to min and
    # max of image
    if not int_lims:
        int_lims = [np.nanmin(img), np.nanmax(img)]
        # check if min and max are same (i.e. all values in img are equal)
        if int_lims[0] == int_lims[1]:
            int_lims[0] -= 1
            int_lims[1] += 1
    # take transpose of image to switch x and y dimensions and display with
    # first pixel having coordinates 0,0
    img = img.T
    if not ax:
        fig = plt.figure()
        ax = fig.gca()
    ax.imshow(img, cmap='gray', vmin=int_lims[0], vmax=int_lims[1],
              origin='lower')
    for idx, binImg in enumerate(bin_imgs):
        binImg = binImg.T
        # check the binary images and img are the same shape
        if binImg.shape != img.shape:
            print('Error: binary image {} does not have same dimensions as image'.format(idx))
            break
        # set the colormap from bin_cols unless not enough colors have been provided
        try:
            cmap = bin_cols[idx]
        except IndexError:
            cmap = 'Greens_r'
            print('WARNING: not enough colormaps provided - defaulting to Green')
        ax.imshow(np.where(binImg == 0, np.nan, binImg), cmap=cmap,
                  origin='lower')
    # set axis to be scaled equally (assumes isotropic pixel dimensions), tight
    # around the image
    plt.axis('image')
    plt.tight_layout()
    return ax
# (Provided) This uses TensorFlow
# *** Available as part of UCL MPHY0025 (Information Processing in Medical Imaging) Assessed Coursework 2018-19 ***
# *** This code is with an Apache 2.0 license, University College London ***
import tensorflow as tf
import networks
import numpy as np
from matplotlib.pyplot import imread
# 1 - Read images and convert to "standard orientation"
files_image_test = ['../data/test/433.png', '../data/test/441.png']
images = np.stack([imread(fn)[::-1, ...].T for fn in files_image_test], axis=0)
# Normalise the test images so they have zero-mean and unit-variance
images = (images-images.mean(axis=(1, 2), keepdims=True)) / images.std(axis=(1, 2), keepdims=True)
image_size = [images.shape[1], images.shape[2]]
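The per-image zero-mean, unit-variance normalisation above can be verified on synthetic data (the 64x64 shape is a placeholder; `keepdims=True` makes the per-image statistics broadcast back over each frame):

```python
import numpy as np

# synthetic stand-ins for a stack of two test images (assumed shape)
images = np.random.rand(2, 64, 64).astype(np.float64)

# same normalisation as in the test script: per-image mean and std over
# the spatial axes, broadcast via keepdims=True
images = (images - images.mean(axis=(1, 2), keepdims=True)) \
    / images.std(axis=(1, 2), keepdims=True)

# each image should now have mean ~0 and std ~1
mean_after = images.mean(axis=(1, 2))
std_after = images.std(axis=(1, 2))
print(mean_after, std_after)
```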