diff --git a/ivy_models/bert/README.rst b/ivy_models/bert/README.rst
new file mode 100644
index 00000000..cf38ffc2
--- /dev/null
+++ b/ivy_models/bert/README.rst
@@ -0,0 +1,77 @@
+.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/logo.png?raw=true#gh-light-mode-only
+ :width: 100%
+ :class: only-light
+
+.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/logo_dark.png?raw=true#gh-dark-mode-only
+ :width: 100%
+ :class: only-dark
+
+
+BERT
+===========
+
+`BERT <https://arxiv.org/abs/1810.04805>`_, short for Bidirectional Encoder Representations from Transformers, differs from earlier
+language representation models in that it pretrains deep bidirectional representations from unlabeled text,
+jointly conditioning on both left and right context in all layers.
+As a result, the pretrained BERT model can be fine-tuned with just one additional output layer to achieve strong performance
+on a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
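The "single additional output layer" mentioned above amounts to a linear classification head applied to BERT's pooled output. A minimal framework-agnostic NumPy sketch follows (the hidden size 768 matches BERT base; the batch size, class count, and weight initialization are illustrative assumptions, not part of this repo's API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pooled output of BERT base: batch of 2, hidden size 768
pooled = rng.standard_normal((2, 768)).astype(np.float32)

# The single additional output layer: a linear head for, say, 3 classes
num_classes = 3
W = rng.standard_normal((768, num_classes)).astype(np.float32) * 0.02
b = np.zeros(num_classes, dtype=np.float32)

logits = pooled @ W + b  # shape (2, 3)

# Softmax over the class dimension gives per-example probabilities
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
```

Fine-tuning then trains `W` and `b` (and usually the encoder weights) on the downstream task's labels.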
+
+Getting started
+-----------------
+
+.. code-block:: python
+
+ import ivy
+ import ivy_models
+ ivy.set_backend("torch")
+
+ # Instantiate the pretrained BERT model
+ ivy_bert = ivy_models.bert_base_uncased(pretrained=True)
+
+ # Tokenize some text to obtain the model inputs (here with a Hugging Face tokenizer)
+ from transformers import BertTokenizer
+ tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
+ inputs = tokenizer("Hello, world!", return_tensors="pt")
+
+ # Convert the input data to Ivy tensors
+ ivy_inputs = {k: ivy.asarray(v.numpy()) for k, v in inputs.items()}
+
+ # Compile the Ivy BERT model with the Ivy input tensors
+ ivy_bert.compile(kwargs=ivy_inputs)
+
+ # Pass the Ivy input tensors through the model and obtain the pooler output
+ ivy_output = ivy_bert(**ivy_inputs)["pooler_output"]
+
+
+See `this demo `_ for a complete usage example.
+
+Citation
+--------
+
+::
+
+ @article{devlin2018bert,
+ title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
+ author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
+ journal={arXiv preprint arXiv:1810.04805},
+ year={2018}
+ }
+
+
+ @article{lenton2021ivy,
+ title={Ivy: Templated deep learning for inter-framework portability},
+ author={Lenton, Daniel and Pardo, Fabio and Falck, Fabian and James, Stephen and Clark, Ronald},
+ journal={arXiv preprint arXiv:2102.02886},
+ year={2021}
+ }
diff --git a/ivy_models/unet/README.rst b/ivy_models/unet/README.rst
new file mode 100644
index 00000000..9cf4d558
--- /dev/null
+++ b/ivy_models/unet/README.rst
@@ -0,0 +1,93 @@
+.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/logo.png?raw=true#gh-light-mode-only
+ :width: 100%
+ :class: only-light
+
+.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/logo_dark.png?raw=true#gh-dark-mode-only
+ :width: 100%
+ :class: only-dark
+
+
+U-Net
+===========
+
+`U-Net <https://arxiv.org/abs/1505.04597>`_ is a convolutional network architecture and training strategy that relies on strong
+data augmentation to use the available annotated samples efficiently, even when data is limited. The architecture consists of a
+contracting path to capture context and a symmetric expanding path that enables precise localization. The network can be trained
+end-to-end from very few images, and it outperformed the prior best method, a sliding-window convolutional network, in the ISBI
+challenge for segmenting neuronal structures in electron microscopic stacks. The same network also won the ISBI cell tracking
+challenge for transmitted light microscopy images (phase contrast and DIC), showcasing its versatility. Moreover, the network is
+fast: segmenting a 512x512 image takes under a second on a modern GPU.
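The contracting/expanding structure described above can be illustrated with a toy NumPy sketch: the contracting path halves the spatial resolution (here via 2x2 max pooling), the expanding path doubles it again, and a skip connection carries the high-resolution encoder features across. All shapes and operations below are illustrative simplifications, not the model's actual implementation:

```python
import numpy as np

def downsample(x):
    # Contracting path step: 2x2 max pooling halves the spatial resolution
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x):
    # Expanding path step: nearest-neighbour upsampling doubles the resolution
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.arange(16.0).reshape(4, 4)   # toy 4x4 feature map
skip = x                            # features saved for the skip connection
y = upsample(downsample(x))         # bottleneck round-trip, back to 4x4

# Skip connection: stack decoder output with the saved encoder features,
# analogous to U-Net's channel-wise concatenation
fused = np.stack([y, skip])
```

In the real network each step also applies learned convolutions, and the skip connections are what let the expanding path recover the fine spatial detail lost during pooling.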
+
+Getting started
+-----------------
+
+.. code-block:: python
+
+ import ivy
+ import ivy_models
+ import torch
+ import urllib.request
+ from PIL import Image
+
+ ivy.set_backend("torch")
+
+ # Load the U-Net model from ivy_models
+ ivy_unet = ivy_models.unet_carvana(n_channels=3, n_classes=2, pretrained=True)
+
+ # Download and open the sample image
+ url = "https://raw.githubusercontent.com/unifyai/models/master/images/car.jpg"
+ urllib.request.urlretrieve(url, "car.jpg")
+ full_img = Image.open("car.jpg")
+
+ # Preprocess the image (the `preprocess` helper is defined in the demo notebook)
+ torch_img = torch.from_numpy(preprocess(None, full_img, 0.5, False)).unsqueeze(0).to("cuda")
+
+ # Convert to an Ivy array in NHWC layout
+ img = ivy.asarray(torch_img.permute((0, 2, 3, 1)), dtype="float32", device="gpu:0")
+
+ # Compile the forward pass
+ ivy_unet.compile(args=(img,))
+
+ # Generate the segmentation mask
+ output = ivy_unet(img)
+ output = ivy.interpolate(output.permute((0, 3, 1, 2)), (full_img.size[1], full_img.size[0]), mode="bilinear")
+ mask = output.argmax(axis=1)
+ mask = ivy.squeeze(mask[0], axis=None).to_numpy()
+
+ # Render the mask as an image (`mask_to_image` is also defined in the demo notebook)
+ result = mask_to_image(mask, [0, 1])
+
+
+See `this demo `_ for a complete usage example.
+
+Citation
+--------
+
+::
+
+ @article{ronneberger2015unet,
+ title={U-Net: Convolutional Networks for Biomedical Image Segmentation},
+ author={Ronneberger, Olaf and Fischer, Philipp and Brox, Thomas},
+ journal={arXiv preprint arXiv:1505.04597},
+ year={2015}
+ }
+
+
+ @article{lenton2021ivy,
+ title={Ivy: Templated deep learning for inter-framework portability},
+ author={Lenton, Daniel and Pardo, Fabio and Falck, Fabian and James, Stephen and Clark, Ronald},
+ journal={arXiv preprint arXiv:2102.02886},
+ year={2021}
+ }