Pix2pix Photo-to-Street-Map Translation

Generate a street map from a satellite photo

Released in 2016, this model is an application of pix2pix, a powerful method for general-purpose image-to-image translation using conditional adversarial networks. Because the adversarial discriminator effectively learns the loss function, the same paradigm generalizes across a wide range of image translation tasks. The U-Net-style generator efficiently aggregates features at multiple scales through skip connections that concatenate encoder and decoder feature maps. This particular model was trained to generate a street map from a satellite photo.
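The skip-connection mechanism can be sketched as follows. This is a minimal NumPy illustration, not the trained network: the actual pix2pix generator uses learned strided convolutions for downsampling and transposed convolutions for upsampling, whereas here simple average pooling and nearest-neighbor upsampling stand in, and all function names are illustrative. The point is that each decoder stage concatenates the matching encoder feature map along the channel axis, so fine-grained detail from early layers reaches the output directly.

```python
import numpy as np

def upsample(x):
    """Nearest-neighbor 2x upsampling over the spatial axes of (H, W, C)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def downsample(x):
    """2x2 average pooling over the spatial axes of (H, W, C)."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def unet_skip_pass(x, depth=3):
    """Encoder-decoder pass in which each decoder stage concatenates the
    encoder feature map of the same scale (the 'skip connection')."""
    skips = []
    for _ in range(depth):
        skips.append(x)      # remember the feature map at this scale
        x = downsample(x)    # halve spatial resolution
    for skip in reversed(skips):
        x = upsample(x)                         # restore resolution
        x = np.concatenate([x, skip], axis=-1)  # channels accumulate
    return x

img = np.random.rand(8, 8, 3)  # toy 8x8, 3-channel "feature map"
out = unet_skip_pass(img)
print(out.shape)  # (8, 8, 12): spatial size restored, channels grown by skips
```

Concatenation (rather than addition) is what lets the decoder weigh low-level encoder features and high-level decoder features independently, which is why the channel count grows at each skip.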

Number of layers: 56 | Parameter count: 54,419,459 | Trained size: 218 MB

Training Set Information

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["Pix2pix Photo-to-Street-Map Translation"]
Out[1]=

Basic usage

Obtain a satellite photo:

In[2]:=
img = ImageResize[
  GeoImage[Entity["City", {"LasVegas", "Nevada", "UnitedStates"}],
   GeoRange -> Quantity[200, "Meters"]], {256, 256}]
Out[2]=

Use the net to draw the street map:

In[3]:=
map = NetModel["Pix2pix Photo-to-Street-Map Translation"][img]
Out[3]=

Requirements

Wolfram Language 11.2 (September 2017) or above

Resource History

Reference

Isola, P., Zhu, J.-Y., Zhou, T., Efros, A. A., "Image-to-Image Translation with Conditional Adversarial Networks," CVPR (2017), arXiv:1611.07004