UNet segmentation code (dataset not included)
Resource file list:

unet-master - 副本/.idea/
unet-master - 副本/.idea/.gitignore 176B
unet-master - 副本/.idea/inspectionProfiles/
unet-master - 副本/.idea/inspectionProfiles/profiles_settings.xml 174B
unet-master - 副本/.idea/inspectionProfiles/Project_Default.xml 510B
unet-master - 副本/.idea/misc.xml 272B
unet-master - 副本/.idea/modules.xml 274B
unet-master - 副本/.idea/unet_42-new.iml 472B
unet-master - 副本/.idea/workspace.xml 7.83KB
unet-master - 副本/__pycache__/
unet-master - 副本/bmp转jpg.py 1KB
unet-master - 副本/bmp转png.py 1001B
unet-master - 副本/data/
unet-master - 副本/data/results/
unet-master - 副本/data/Test_Images/
unet-master - 副本/data/Test_Labels/
unet-master - 副本/data/Training_Images/
unet-master - 副本/data/Training_Labels/
unet-master - 副本/images/
unet-master - 副本/images/111/
unet-master - 副本/images/111/ISIC_0000000.jpg 48.79KB
unet-master - 副本/images/111/ISIC_0000000_res.png 4.76KB
unet-master - 副本/images/ISIC_0000000.jpg 48.79KB
unet-master - 副本/images/ISIC_0000000_res.png 4.72KB
unet-master - 副本/images/right.jpeg 26.57KB
unet-master - 副本/images/tmp/
unet-master - 副本/images/tmp/tmp_upload.jpeg 21.27KB
unet-master - 副本/images/UI/
unet-master - 副本/images/UI/logo.jpeg 33.37KB
unet-master - 副本/images/UI/lufei.png 215.7KB
unet-master - 副本/images/UI/right.jpeg 25.45KB
unet-master - 副本/images/UI/up.jpeg 27.88KB
unet-master - 副本/images/up.jpeg 21.27KB
unet-master - 副本/labelme2seg.py 1.02KB
unet-master - 副本/model/
unet-master - 副本/model/__init__.py
unet-master - 副本/model/__pycache__/
unet-master - 副本/model/__pycache__/__init__.cpython-37.pyc 141B
unet-master - 副本/model/__pycache__/__init__.cpython-38.pyc 161B
unet-master - 副本/model/__pycache__/unet_model.cpython-37.pyc 1.34KB
unet-master - 副本/model/__pycache__/unet_model.cpython-38.pyc 1.37KB
unet-master - 副本/model/__pycache__/unet_parts.cpython-37.pyc 2.79KB
unet-master - 副本/model/__pycache__/unet_parts.cpython-38.pyc 2.75KB
unet-master - 副本/model/unet_model.py 1.29KB
unet-master - 副本/model/unet_parts.py 3.39KB
unet-master - 副本/predict.py 1.72KB
unet-master - 副本/requirements.txt 143B
unet-master - 副本/results/
unet-master - 副本/results/confusion_matrix.csv 68B
unet-master - 副本/results/mIoU.png 15.56KB
unet-master - 副本/results/mPA.png 14.85KB
unet-master - 副本/results/Precision.png 14.7KB
unet-master - 副本/results/Recall.png 14.06KB
unet-master - 副本/test.py 4.17KB
unet-master - 副本/testdata/
unet-master - 副本/testdata/jsons/
unet-master - 副本/testdata/jsons/Case-1-U-1-1.json 65.34KB
unet-master - 副本/testdata/jsons/Case-2-U-2-2.json 77.86KB
unet-master - 副本/testdata/jsons/Case-2-U-2-3.json 89.63KB
unet-master - 副本/testdata/jsons/Case-3-U-5-0.json 83.71KB
unet-master - 副本/testdata/jsons/Case-3-U-5-2.json 83.12KB
unet-master - 副本/testdata/jsons/Case-3-U-5-3.json 80.91KB
unet-master - 副本/testdata/jsons/Case-4-U-8-0.json 84.39KB
unet-master - 副本/testdata/jsons/Case-4-U-8-1.json 85.46KB
unet-master - 副本/testdata/jsons/Case-5-U-10-0.json 78.91KB
unet-master - 副本/testdata/jsons/Case-5-U-10-1.json 82.62KB
unet-master - 副本/testdata/labels/
unet-master - 副本/testdata/labels/Case-1-U-1-1.png 3.18KB
unet-master - 副本/testdata/labels/Case-2-U-2-2.png 2.82KB
unet-master - 副本/testdata/labels/Case-2-U-2-3.png 3.25KB
unet-master - 副本/testdata/labels/Case-3-U-5-0.png 3.55KB
unet-master - 副本/testdata/labels/Case-3-U-5-2.png 3.59KB
unet-master - 副本/testdata/labels/Case-3-U-5-3.png 3.24KB
unet-master - 副本/testdata/labels/Case-4-U-8-0.png 3.02KB
unet-master - 副本/testdata/labels/Case-4-U-8-1.png 2.44KB
unet-master - 副本/testdata/labels/Case-5-U-10-0.png 3.47KB
unet-master - 副本/testdata/labels/Case-5-U-10-1.png 3.46KB
unet-master - 副本/train.py 2.46KB
unet-master - 副本/ui.py 8.04KB
unet-master - 副本/unet原文.pdf 1.57MB
unet-master - 副本/utils/
unet-master - 副本/utils/__pycache__/
unet-master - 副本/utils/__pycache__/dataset.cpython-37.pyc 1.81KB
unet-master - 副本/utils/__pycache__/dataset.cpython-38.pyc 1.84KB
unet-master - 副本/utils/__pycache__/utils_metrics.cpython-37.pyc 6.01KB
unet-master - 副本/utils/__pycache__/utils_metrics.cpython-38.pyc 6.35KB
unet-master - 副本/utils/data_remove_seg.py 730B
unet-master - 副本/utils/dataset.py 2.81KB
unet-master - 副本/utils/gen_split.py 1.57KB
unet-master - 副本/utils/label2png.py 1.41KB
unet-master - 副本/utils/utils_metrics.py 9.26KB
unet-master - 副本/切换镜像.txt 348B
Resource introduction:
UNet segmentation code (dataset not included)
U-Net: Convolutional Networks for Biomedical Image Segmentation
Olaf Ronneberger, Philipp Fischer, and Thomas Brox
Computer Science Department and BIOSS Centre for Biological Signalling Studies,
University of Freiburg, Germany
ronneber@informatik.uni-freiburg.de,
WWW home page: http://lmb.informatik.uni-freiburg.de/
Abstract. There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.
1 Introduction
In the last two years, deep convolutional networks have outperformed the state of
the art in many visual recognition tasks, e.g. [7,3]. While convolutional networks
have already existed for a long time [8], their success was limited due to the
size of the available training sets and the size of the considered networks. The
breakthrough by Krizhevsky et al. [7] was due to supervised training of a large
network with 8 layers and millions of parameters on the ImageNet dataset with
1 million training images. Since then, even larger and deeper networks have been
trained [12].
The typical use of convolutional networks is on classification tasks, where
the output to an image is a single class label. However, in many visual tasks,
especially in biomedical image processing, the desired output should include
localization, i.e., a class label is supposed to be assigned to each pixel. Moreover, thousands of training images are usually beyond reach in biomedical tasks.
Hence, Ciresan et al. [1] trained a network in a sliding-window setup to predict
the class label of each pixel by providing a local region (patch) around that pixel
arXiv:1505.04597v1 [cs.CV] 18 May 2015

[Figure 1 (diagram): labelled operations are conv 3x3 + ReLU, copy and crop, max pool 2x2, up-conv 2x2, conv 1x1. Tile sizes run from 572 x 572 at the input tile down to 28² at the lowest resolution and back up to a 388 x 388 output segmentation map; channel counts are 64, 128, 256, 512, 1024.]
Fig. 1. U-net architecture (example for 32x32 pixels in the lowest resolution). Each blue box corresponds to a multi-channel feature map. The number of channels is denoted on top of the box. The x-y-size is provided at the lower left edge of the box. White boxes represent copied feature maps. The arrows denote the different operations.
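The tile sizes in the figure follow directly from the layer arithmetic: each valid (unpadded) 3x3 convolution trims one pixel per side (-2 total), each 2x2 max pool halves the resolution, and each 2x2 up-conv doubles it. A minimal sketch (the function name is ours) reproduces the 572 → 388 chain:

```python
def unet_output_size(size, depth=4):
    """Spatial size of the U-net output for a given square input tile,
    assuming two valid 3x3 convs per level, 2x2 pooling, 2x2 up-convs."""
    for _ in range(depth):          # contracting path
        size -= 4                   # two valid 3x3 convs: -2 each
        assert size % 2 == 0, "tile size must stay even before pooling"
        size //= 2                  # 2x2 max pool
    size -= 4                       # two convs at the lowest resolution
    for _ in range(depth):          # expanding path
        size = size * 2 - 4         # 2x2 up-conv, then two valid convs
    return size

print(unet_output_size(572))  # -> 388, matching the 388 x 388 output map
```

This is why the input tile (572 x 572) is larger than the output map (388 x 388): the 92-pixel border on each side is exactly the context consumed by the valid convolutions.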
as input. First, this network can localize. Secondly, the training data in terms
of patches is much larger than the number of training images. The resulting
network won the EM segmentation challenge at ISBI 2012 by a large margin.
Obviously, the strategy in Ciresan et al. [1] has two drawbacks. First, it
is quite slow because the network must be run separately for each patch, and
there is a lot of redundancy due to overlapping patches. Secondly, there is a
trade-off between localization accuracy and the use of context. Larger patches
require more max-pooling layers that reduce the localization accuracy, while
small patches allow the network to see only little context. More recent approaches
[11,4] proposed a classifier output that takes into account the features from
multiple layers. Good localization and the use of context are possible at the
same time.
In this paper, we build upon a more elegant architecture, the so-called “fully
convolutional network” [9]. We modify and extend this architecture such that it
works with very few training images and yields more precise segmentations; see
Figure 1. The main idea in [9] is to supplement a usual contracting network by
successive layers, where pooling operators are replaced by upsampling operators.
Hence, these layers increase the resolution of the output. In order to localize, high
resolution features from the contracting path are combined with the upsampled

output. A successive convolution layer can then learn to assemble a more precise
output based on this information.
Fig. 2. Overlap-tile strategy for seamless segmentation of arbitrarily large images (here segmentation of neuronal structures in EM stacks). Prediction of the segmentation in the yellow area requires image data within the blue area as input. Missing input data is extrapolated by mirroring.
One important modification in our architecture is that in the upsampling
part we have also a large number of feature channels, which allow the network
to propagate context information to higher resolution layers. As a consequence,
the expansive path is more or less symmetric to the contracting path, and yields
a u-shaped architecture. The network does not have any fully connected layers
and only uses the valid part of each convolution, i.e., the segmentation map only
contains the pixels for which the full context is available in the input image.
This strategy allows the seamless segmentation of arbitrarily large images by an
overlap-tile strategy (see Figure 2). To predict the pixels in the border region
of the image, the missing context is extrapolated by mirroring the input image.
This tiling strategy is important to apply the network to large images, since
otherwise the resolution would be limited by the GPU memory.
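The tiling scheme described above can be sketched as follows. This is an illustrative reconstruction, not the paper's Caffe implementation: `net`, `tile`, and `context` are our names, and the toy check stands in for a real network.

```python
import numpy as np

def predict_tiled(image, net, tile, context):
    """Overlap-tile inference sketch: mirror-pad the image, run `net` on
    overlapping (tile + 2*context)-sized inputs, stitch the valid outputs.
    Assumes image height and width are multiples of `tile`."""
    h, w = image.shape
    padded = np.pad(image, context, mode="reflect")   # mirror extrapolation
    out = np.zeros_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = padded[y:y + tile + 2 * context,
                           x:x + tile + 2 * context]
            out[y:y + tile, x:x + tile] = net(patch)  # (tile, tile) output
    return out

# toy check: a "net" that just crops away the context reproduces the image
img = np.arange(64, dtype=float).reshape(8, 8)
crop_net = lambda p: p[2:-2, 2:-2]
assert np.array_equal(predict_tiled(img, crop_net, tile=4, context=2), img)
```

For the network of Figure 1 the corresponding values would be tile=388 and context=92, so each 572 x 572 input tile yields one 388 x 388 output tile.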
As for our tasks there is very little training data available, we use excessive data augmentation by applying elastic deformations to the available training images. This allows the network to learn invariance to such deformations, without the need to see these transformations in the annotated image corpus. This is particularly important in biomedical segmentation, since deformation used to be the most common variation in tissue and realistic deformations can be simulated efficiently. The value of data augmentation for learning invariance has been shown in Dosovitskiy et al. [2] in the scope of unsupervised feature learning.
Another challenge in many cell segmentation tasks is the separation of touching objects of the same class; see Figure 3. To this end, we propose the use of a weighted loss, where the separating background labels between touching cells obtain a large weight in the loss function.
The resulting network is applicable to various biomedical segmentation problems. In this paper, we show results on the segmentation of neuronal structures in EM stacks (an ongoing competition started at ISBI 2012), where we out-