diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
new file mode 100644
index 0000000..c0c4235
--- /dev/null
+++ b/.github/pull_request_template.md
@@ -0,0 +1,46 @@
+## Summary
+
+
+
+Motivation:
+
+
+
+Changes:
+- First change
+- Second change
+
+
+
+Fixes # (issue)
+
+## Test plan
+
+
+
+
+
+Screenshot of before versus after:
+
+Link to wandb report:
+
+## Type of change
+
+- [ ] Documentation-only change
+- [ ] Bug fix (non-breaking change that fixes an issue)
+- [ ] Refactor (non-breaking change that neither fixes a bug nor adds a feature)
+- [ ] New feature (non-breaking change that adds new functionality)
+- [ ] Breaking change (fix or feature that would break existing functionality downstream)
+
+## Type of requested review
+
+- [ ] I want a high-level review.
+- [ ] I want a design review.
+- [ ] I want a review of the implementation.
+
+## Checklist
+
+- [ ] I have performed manual end-to-end testing of the feature in my environment.
+- [ ] I have added tests that show that the PR is functional.
+- [ ] I have documented my changes.
+- [ ] I have added relevant collaborators to review the PR before merge.
diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml
new file mode 100644
index 0000000..0cccb07
--- /dev/null
+++ b/.github/workflows/lint.yml
@@ -0,0 +1,33 @@
+name: Lint
+
+on:
+ push:
+ branches:
+ - '**'
+
+jobs:
+ run-linters:
+ name: Run linters
+ runs-on: ubuntu-latest
+
+ steps:
+ - name: Check out Git repository
+ uses: actions/checkout@v3
+
+ - name: Set up Python
+ uses: actions/setup-python@v1
+ with:
+ python-version: 3.7
+
+ - name: Install Python dependencies
+ run: pip install black
+
+ - name: Run linters
+ uses: wearerequired/lint-action@v2
+ with:
+ auto_fix: true
+ black: true
+ black_auto_fix: true
+ git_name: "Lint Action"
+ git_email: "lint-action@stanford.edu"
+
diff --git a/README.md b/README.md
index 36a6a97..3e92cc2 100644
--- a/README.md
+++ b/README.md
@@ -1 +1,21 @@
# AO-Grasp: Articulated Object Grasp Generation
+
+## Installation
+
+1. Create a conda environment:
+```
+conda create -n ao-grasp python=3.7
+```
+
+2. Install PyTorch:
+
+```
+pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
+```
+
+3. Install PointNet++. In the `ao-grasp` conda env, install the PointNet2_PyTorch package bundled in this repo by running the following commands:
+```
+cd ao-grasp/models/Pointnet2_PyTorch
+pip install -r requirements.txt
+pip install -e .
+```
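+
+4. (Optional) Verify the installation. A minimal sanity check, assuming a CUDA-capable GPU is available (the PointNet++ custom ops are CUDA-only):
+
+```
+python -c "import torch, pointnet2_ops; print(torch.cuda.is_available())"
+```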
\ No newline at end of file
diff --git a/ao-grasp/models/Pointnet2_PyTorch/.gitignore b/ao-grasp/models/Pointnet2_PyTorch/.gitignore
new file mode 100644
index 0000000..5c3f7cf
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/.gitignore
@@ -0,0 +1,25 @@
+__pycache__
+*.pth*
+.autoenv*
+runs
+build
+checkpoints
+*.prof
+.lvimrc
+.vimtags
+.ccls
+.ccls-cache/
+dist/
+pointnet2.egg-info/
+*.zip
+*.so
+.tox/
+.mypy_cache
+**/*.pyc
+pointnet2_ops.egg-info/
+
+
+pointnet2/data/modelnet40_normal_resampled/
+pointnet2/data/modelnet40_normal_resampled_cache/
+pointnet2/data/modelnet40_ply_hdf5_2048/
+outputs/
diff --git a/ao-grasp/models/Pointnet2_PyTorch/README.rst b/ao-grasp/models/Pointnet2_PyTorch/README.rst
new file mode 100644
index 0000000..d53f6bf
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/README.rst
@@ -0,0 +1,126 @@
+Pointnet2/Pointnet++ PyTorch
+============================
+
+
+**Project Status**: Unmaintained. Due to finite time, I have no plans to update this code and I will not be responding to issues.
+
+* Implementation of Pointnet2/Pointnet++ written in `PyTorch <http://pytorch.org>`_.
+
+* Supports Multi-GPU via `nn.DataParallel <https://pytorch.org/docs/stable/nn.html#torch.nn.DataParallel>`_.
+
+* Supports PyTorch version >= 1.0.0. Use `v1.0 <https://github.com/erikwijmans/Pointnet2_PyTorch/tree/v1.0>`_
+  for support of older versions of PyTorch.
+
+
+See the official code release for the paper (in TensorFlow), `charlesq34/pointnet2 <https://github.com/charlesq34/pointnet2>`_,
+for official model definitions and hyper-parameters.
+
+The custom ops used by Pointnet++ are currently **ONLY** supported on the GPU using CUDA.
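+
+For example, a minimal sketch of exercising one of the ops (assuming ``pointnet2_ops`` is installed and a CUDA device is available; ``furthest_point_sampling`` is one of the bundled ops):
+
+::
+
+    import torch
+    from pointnet2_ops import pointnet2_utils
+
+    # The custom ops only accept CUDA tensors; CPU tensors raise an error.
+    xyz = torch.randn(2, 1024, 3, device="cuda")
+    idx = pointnet2_utils.furthest_point_sampling(xyz, 128)  # (2, 128) int indices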
+
+Setup
+-----
+
+* Install ``python`` -- This repo is tested with ``{3.6, 3.7}``
+
+* Install ``pytorch`` with CUDA -- This repo is tested with ``{1.4, 1.5}``.
+ It may work with versions newer than ``1.5``, but this is not guaranteed.
+
+
+* Install dependencies
+
+ ::
+
+ pip install -r requirements.txt
+
+
+
+
+
+
+
+Example training
+----------------
+
+Install with: ``pip install -e .``
+
+The example training script can be found in ``pointnet2/train.py``. The training examples are built
+using `PyTorch Lightning <https://github.com/PyTorchLightning/pytorch-lightning>`_ and `Hydra <https://hydra.cc/>`_.
+
+
+A classification pointnet can be trained as
+
+::
+
+ python pointnet2/train.py task=cls
+
+ # Or with model=msg for multi-scale grouping
+
+ python pointnet2/train.py task=cls model=msg
+
+
+Similarly, semantic segmentation can be trained by changing the task to ``semseg``
+
+::
+
+ python pointnet2/train.py task=semseg
+
+
+
+Multi-GPU training can be enabled by passing a list of GPU ids to use, for instance
+
+::
+
+ python pointnet2/train.py task=cls gpus=[0,1,2,3]
+
+
+Building only the CUDA kernels
+----------------------------------
+
+
+::
+
+ pip install pointnet2_ops_lib/.
+
+ # Or if you would like to install them directly (this can also be used in a requirements.txt)
+
+  pip install "git+https://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops&subdirectory=pointnet2_ops_lib"
+
+
+
+
+
+
+Contributing
+------------
+
+This repository uses `black <https://github.com/psf/black>`_ for linting and style enforcement on Python code.
+For C++/CUDA code,
+`clang-format <https://clang.llvm.org/docs/ClangFormat.html>`_ is used for style. The simplest way to
+comply with style is via `pre-commit <https://pre-commit.com/>`_
+
+::
+
+ pip install pre-commit
+ pre-commit install
+
+
+
+Citation
+--------
+
+::
+
+ @article{pytorchpointnet++,
+ Author = {Erik Wijmans},
+ Title = {Pointnet++ Pytorch},
+ Journal = {https://github.com/erikwijmans/Pointnet2_PyTorch},
+ Year = {2018}
+ }
+
+ @inproceedings{qi2017pointnet++,
+ title={Pointnet++: Deep hierarchical feature learning on point sets in a metric space},
+ author={Qi, Charles Ruizhongtai and Yi, Li and Su, Hao and Guibas, Leonidas J},
+ booktitle={Advances in Neural Information Processing Systems},
+ pages={5099--5108},
+ year={2017}
+ }
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/__init__.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/__init__.py
new file mode 100644
index 0000000..ebc06de
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/__init__.py
@@ -0,0 +1,2 @@
+from pointnet2 import data, models, utils
+from pointnet2._version import __version__
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/_version.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/_version.py
new file mode 100644
index 0000000..528787c
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/_version.py
@@ -0,0 +1 @@
+__version__ = "3.0.0"
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/config.yaml b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/config.yaml
new file mode 100644
index 0000000..4336fca
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/config.yaml
@@ -0,0 +1,19 @@
+defaults:
+ - task: cls
+ - model: ssg
+ - task_model: ${defaults.0.task}-${defaults.1.model}
+
+hydra:
+ run:
+ dir: outputs
+
+gpus:
+ - 0
+
+optimizer: ???
+
+task_model: ???
+
+model: ???
+
+distrib_backend: dp
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/model/msg.yaml b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/model/msg.yaml
new file mode 100644
index 0000000..d818a01
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/model/msg.yaml
@@ -0,0 +1,2 @@
+model:
+ use_xyz: True
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/model/ssg.yaml b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/model/ssg.yaml
new file mode 100644
index 0000000..d818a01
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/model/ssg.yaml
@@ -0,0 +1,2 @@
+model:
+ use_xyz: True
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task/cls.yaml b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task/cls.yaml
new file mode 100644
index 0000000..d5983e3
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task/cls.yaml
@@ -0,0 +1,12 @@
+optimizer:
+ weight_decay: 0.0
+ lr: 1e-3
+ lr_decay: 0.7
+ bn_momentum: 0.5
+ bnm_decay: 0.5
+ decay_step: 2e4
+
+
+num_points: 4096
+epochs: 200
+batch_size: 32
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task/semseg.yaml b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task/semseg.yaml
new file mode 100644
index 0000000..81e3986
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task/semseg.yaml
@@ -0,0 +1,11 @@
+optimizer:
+ weight_decay: 0.0
+ lr: 1e-3
+ lr_decay: 0.5
+ bn_momentum: 0.5
+ bnm_decay: 0.5
+ decay_step: 3e5
+
+num_points: 4096
+epochs: 50
+batch_size: 24
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task_model/cls-msg.yaml b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task_model/cls-msg.yaml
new file mode 100644
index 0000000..83511e3
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task_model/cls-msg.yaml
@@ -0,0 +1,3 @@
+task_model:
+ class: pointnet2.models.PointNet2ClassificationMSG
+ name: cls-msg
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task_model/cls-ssg.yaml b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task_model/cls-ssg.yaml
new file mode 100644
index 0000000..d7d0e19
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task_model/cls-ssg.yaml
@@ -0,0 +1,3 @@
+task_model:
+ class: pointnet2.models.PointNet2ClassificationSSG
+ name: cls-ssg
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task_model/semseg-msg.yaml b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task_model/semseg-msg.yaml
new file mode 100644
index 0000000..416833b
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task_model/semseg-msg.yaml
@@ -0,0 +1,3 @@
+task_model:
+ class: pointnet2.models.PointNet2SemSegMSG
+ name: sem-msg
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task_model/semseg-ssg.yaml b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task_model/semseg-ssg.yaml
new file mode 100644
index 0000000..80af51a
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/config/task_model/semseg-ssg.yaml
@@ -0,0 +1,3 @@
+task_model:
+ class: pointnet2.models.PointNet2SemSegSSG
+ name: sem-ssg
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/.gitignore b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/.gitignore
new file mode 100644
index 0000000..5fe8539
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/.gitignore
@@ -0,0 +1,2 @@
+indoor3d_sem_seg_hdf5_data
+modelnet40_ply_hdf5_2048
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/Indoor3DSemSegLoader.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/Indoor3DSemSegLoader.py
new file mode 100644
index 0000000..1a449a7
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/Indoor3DSemSegLoader.py
@@ -0,0 +1,105 @@
+import os
+import shlex
+import subprocess
+
+import h5py
+import numpy as np
+import torch
+import torch.utils.data as data
+
+BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+
+
+def _get_data_files(list_filename):
+ with open(list_filename) as f:
+ return [line.rstrip() for line in f]
+
+
+def _load_data_file(name):
+ f = h5py.File(name, "r")
+ data = f["data"][:]
+ label = f["label"][:]
+ return data, label
+
+
+class Indoor3DSemSeg(data.Dataset):
+ def __init__(self, num_points, train=True, download=True, data_precent=1.0):
+ super().__init__()
+ self.data_precent = data_precent
+ self.folder = "indoor3d_sem_seg_hdf5_data"
+ self.data_dir = os.path.join(BASE_DIR, self.folder)
+ self.url = (
+ "https://shapenet.cs.stanford.edu/media/indoor3d_sem_seg_hdf5_data.zip"
+ )
+
+ if download and not os.path.exists(self.data_dir):
+ zipfile = os.path.join(BASE_DIR, os.path.basename(self.url))
+ subprocess.check_call(
+ shlex.split("curl {} -o {}".format(self.url, zipfile))
+ )
+
+ subprocess.check_call(
+ shlex.split("unzip {} -d {}".format(zipfile, BASE_DIR))
+ )
+
+ subprocess.check_call(shlex.split("rm {}".format(zipfile)))
+
+ self.train, self.num_points = train, num_points
+
+ all_files = _get_data_files(os.path.join(self.data_dir, "all_files.txt"))
+ room_filelist = _get_data_files(
+ os.path.join(self.data_dir, "room_filelist.txt")
+ )
+
+ data_batchlist, label_batchlist = [], []
+ for f in all_files:
+ data, label = _load_data_file(os.path.join(BASE_DIR, f))
+ data_batchlist.append(data)
+ label_batchlist.append(label)
+
+ data_batches = np.concatenate(data_batchlist, 0)
+ labels_batches = np.concatenate(label_batchlist, 0)
+
+ test_area = "Area_5"
+ train_idxs, test_idxs = [], []
+ for i, room_name in enumerate(room_filelist):
+ if test_area in room_name:
+ test_idxs.append(i)
+ else:
+ train_idxs.append(i)
+
+ if self.train:
+ self.points = data_batches[train_idxs, ...]
+ self.labels = labels_batches[train_idxs, ...]
+ else:
+ self.points = data_batches[test_idxs, ...]
+ self.labels = labels_batches[test_idxs, ...]
+
+ def __getitem__(self, idx):
+ pt_idxs = np.arange(0, self.num_points)
+ np.random.shuffle(pt_idxs)
+
+ current_points = torch.from_numpy(self.points[idx, pt_idxs].copy()).float()
+ current_labels = torch.from_numpy(self.labels[idx, pt_idxs].copy()).long()
+
+ return current_points, current_labels
+
+ def __len__(self):
+ return int(self.points.shape[0] * self.data_precent)
+
+ def set_num_points(self, pts):
+ self.num_points = pts
+
+ def randomize(self):
+ pass
+
+
+if __name__ == "__main__":
+    dset = Indoor3DSemSeg(16, train=True)
+ print(dset[0])
+ print(len(dset))
+ dloader = torch.utils.data.DataLoader(dset, batch_size=32, shuffle=True)
+ for i, data in enumerate(dloader, 0):
+ inputs, labels = data
+ if i == len(dloader) - 1:
+ print(inputs.size())
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/ModelNet40Loader.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/ModelNet40Loader.py
new file mode 100644
index 0000000..f4cfff3
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/ModelNet40Loader.py
@@ -0,0 +1,161 @@
+import os
+import os.path as osp
+import shlex
+import shutil
+import subprocess
+
+import lmdb
+import msgpack_numpy
+import numpy as np
+import torch
+import torch.utils.data as data
+import tqdm
+
+BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+
+
+def pc_normalize(pc):
+ centroid = np.mean(pc, axis=0)
+ pc = pc - centroid
+ m = np.max(np.sqrt(np.sum(pc**2, axis=1)))
+ pc = pc / m
+ return pc
+
+
+class ModelNet40Cls(data.Dataset):
+ def __init__(self, num_points, transforms=None, train=True, download=True):
+ super().__init__()
+
+ self.transforms = transforms
+
+ self.set_num_points(num_points)
+ self._cache = os.path.join(BASE_DIR, "modelnet40_normal_resampled_cache")
+
+ if not osp.exists(self._cache):
+ self.folder = "modelnet40_normal_resampled"
+ self.data_dir = os.path.join(BASE_DIR, self.folder)
+ self.url = (
+ "https://shapenet.cs.stanford.edu/media/modelnet40_normal_resampled.zip"
+ )
+
+ if download and not os.path.exists(self.data_dir):
+ zipfile = os.path.join(BASE_DIR, os.path.basename(self.url))
+ subprocess.check_call(
+ shlex.split("curl {} -o {}".format(self.url, zipfile))
+ )
+
+ subprocess.check_call(
+ shlex.split("unzip {} -d {}".format(zipfile, BASE_DIR))
+ )
+
+ subprocess.check_call(shlex.split("rm {}".format(zipfile)))
+
+ self.train = train
+ self.set_num_points(num_points)
+
+ self.catfile = os.path.join(self.data_dir, "modelnet40_shape_names.txt")
+ self.cat = [line.rstrip() for line in open(self.catfile)]
+ self.classes = dict(zip(self.cat, range(len(self.cat))))
+
+ os.makedirs(self._cache)
+
+            print("Converting to LMDB for faster dataloading while training")
+ for split in ["train", "test"]:
+ if split == "train":
+ shape_ids = [
+ line.rstrip()
+ for line in open(
+ os.path.join(self.data_dir, "modelnet40_train.txt")
+ )
+ ]
+ else:
+ shape_ids = [
+ line.rstrip()
+ for line in open(
+ os.path.join(self.data_dir, "modelnet40_test.txt")
+ )
+ ]
+
+ shape_names = ["_".join(x.split("_")[0:-1]) for x in shape_ids]
+ # list of (shape_name, shape_txt_file_path) tuple
+ self.datapath = [
+ (
+ shape_names[i],
+ os.path.join(self.data_dir, shape_names[i], shape_ids[i])
+ + ".txt",
+ )
+ for i in range(len(shape_ids))
+ ]
+
+ with lmdb.open(
+ osp.join(self._cache, split), map_size=1 << 36
+ ) as lmdb_env, lmdb_env.begin(write=True) as txn:
+ for i in tqdm.trange(len(self.datapath)):
+ fn = self.datapath[i]
+ point_set = np.loadtxt(fn[1], delimiter=",").astype(np.float32)
+ cls = self.classes[self.datapath[i][0]]
+ cls = int(cls)
+
+ txn.put(
+ str(i).encode(),
+ msgpack_numpy.packb(
+ dict(pc=point_set, lbl=cls), use_bin_type=True
+ ),
+ )
+
+ shutil.rmtree(self.data_dir)
+
+ self._lmdb_file = osp.join(self._cache, "train" if train else "test")
+ with lmdb.open(self._lmdb_file, map_size=1 << 36) as lmdb_env:
+ self._len = lmdb_env.stat()["entries"]
+
+ self._lmdb_env = None
+
+ def __getitem__(self, idx):
+ if self._lmdb_env is None:
+ self._lmdb_env = lmdb.open(
+ self._lmdb_file, map_size=1 << 36, readonly=True, lock=False
+ )
+
+ with self._lmdb_env.begin(buffers=True) as txn:
+ ele = msgpack_numpy.unpackb(txn.get(str(idx).encode()), raw=False)
+
+ point_set = ele["pc"]
+
+ pt_idxs = np.arange(0, self.num_points)
+ np.random.shuffle(pt_idxs)
+
+ point_set = point_set[pt_idxs, :]
+ point_set[:, 0:3] = pc_normalize(point_set[:, 0:3])
+
+ if self.transforms is not None:
+ point_set = self.transforms(point_set)
+
+ return point_set, ele["lbl"]
+
+ def __len__(self):
+ return self._len
+
+ def set_num_points(self, pts):
+ self.num_points = min(int(1e4), pts)
+
+
+if __name__ == "__main__":
+ from torchvision import transforms
+ import data_utils as d_utils
+
+ transforms = transforms.Compose(
+ [
+ d_utils.PointcloudToTensor(),
+ d_utils.PointcloudRotate(axis=np.array([1, 0, 0])),
+ d_utils.PointcloudScale(),
+ d_utils.PointcloudTranslate(),
+ d_utils.PointcloudJitter(),
+ ]
+ )
+ dset = ModelNet40Cls(16, train=True, transforms=transforms)
+ print(dset[0][0])
+ print(dset[0][1])
+ print(len(dset))
+ dloader = torch.utils.data.DataLoader(dset, batch_size=32, shuffle=True)
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/__init__.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/__init__.py
new file mode 100644
index 0000000..ff9bc73
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/__init__.py
@@ -0,0 +1,2 @@
+from .Indoor3DSemSegLoader import Indoor3DSemSeg
+from .ModelNet40Loader import ModelNet40Cls
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/data_utils.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/data_utils.py
new file mode 100644
index 0000000..7784443
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/data/data_utils.py
@@ -0,0 +1,141 @@
+import numpy as np
+import torch
+
+
+def angle_axis(angle, axis):
+    # type: (float, np.ndarray) -> torch.Tensor
+    r"""Returns a 3x3 rotation matrix that performs a rotation around axis by angle
+
+ Parameters
+ ----------
+ angle : float
+ Angle to rotate by
+ axis: np.ndarray
+ Axis to rotate about
+
+ Returns
+ -------
+ torch.Tensor
+ 3x3 rotation matrix
+ """
+ u = axis / np.linalg.norm(axis)
+ cosval, sinval = np.cos(angle), np.sin(angle)
+
+ # yapf: disable
+ cross_prod_mat = np.array([[0.0, -u[2], u[1]],
+ [u[2], 0.0, -u[0]],
+ [-u[1], u[0], 0.0]])
+
+ R = torch.from_numpy(
+ cosval * np.eye(3)
+ + sinval * cross_prod_mat
+ + (1.0 - cosval) * np.outer(u, u)
+ )
+ # yapf: enable
+ return R.float()
+
+
+class PointcloudScale(object):
+ def __init__(self, lo=0.8, hi=1.25):
+ self.lo, self.hi = lo, hi
+
+ def __call__(self, points):
+ scaler = np.random.uniform(self.lo, self.hi)
+ points[:, 0:3] *= scaler
+ return points
+
+
+class PointcloudRotate(object):
+ def __init__(self, axis=np.array([0.0, 1.0, 0.0])):
+ self.axis = axis
+
+ def __call__(self, points):
+ rotation_angle = np.random.uniform() * 2 * np.pi
+ rotation_matrix = angle_axis(rotation_angle, self.axis)
+
+ normals = points.size(1) > 3
+ if not normals:
+ return torch.matmul(points, rotation_matrix.t())
+ else:
+ pc_xyz = points[:, 0:3]
+ pc_normals = points[:, 3:]
+ points[:, 0:3] = torch.matmul(pc_xyz, rotation_matrix.t())
+ points[:, 3:] = torch.matmul(pc_normals, rotation_matrix.t())
+
+ return points
+
+
+class PointcloudRotatePerturbation(object):
+ def __init__(self, angle_sigma=0.06, angle_clip=0.18):
+ self.angle_sigma, self.angle_clip = angle_sigma, angle_clip
+
+ def _get_angles(self):
+ angles = np.clip(
+ self.angle_sigma * np.random.randn(3), -self.angle_clip, self.angle_clip
+ )
+
+ return angles
+
+ def __call__(self, points):
+ angles = self._get_angles()
+ Rx = angle_axis(angles[0], np.array([1.0, 0.0, 0.0]))
+ Ry = angle_axis(angles[1], np.array([0.0, 1.0, 0.0]))
+ Rz = angle_axis(angles[2], np.array([0.0, 0.0, 1.0]))
+
+ rotation_matrix = torch.matmul(torch.matmul(Rz, Ry), Rx)
+
+ normals = points.size(1) > 3
+ if not normals:
+ return torch.matmul(points, rotation_matrix.t())
+ else:
+ pc_xyz = points[:, 0:3]
+ pc_normals = points[:, 3:]
+ points[:, 0:3] = torch.matmul(pc_xyz, rotation_matrix.t())
+ points[:, 3:] = torch.matmul(pc_normals, rotation_matrix.t())
+
+ return points
+
+
+class PointcloudJitter(object):
+ def __init__(self, std=0.01, clip=0.05):
+ self.std, self.clip = std, clip
+
+ def __call__(self, points):
+ jittered_data = (
+ points.new(points.size(0), 3)
+ .normal_(mean=0.0, std=self.std)
+ .clamp_(-self.clip, self.clip)
+ )
+ points[:, 0:3] += jittered_data
+ return points
+
+
+class PointcloudTranslate(object):
+ def __init__(self, translate_range=0.1):
+ self.translate_range = translate_range
+
+ def __call__(self, points):
+ translation = np.random.uniform(-self.translate_range, self.translate_range)
+ points[:, 0:3] += translation
+ return points
+
+
+class PointcloudToTensor(object):
+ def __call__(self, points):
+ return torch.from_numpy(points).float()
+
+
+class PointcloudRandomInputDropout(object):
+ def __init__(self, max_dropout_ratio=0.875):
+ assert max_dropout_ratio >= 0 and max_dropout_ratio < 1
+ self.max_dropout_ratio = max_dropout_ratio
+
+ def __call__(self, points):
+ pc = points.numpy()
+
+ dropout_ratio = np.random.random() * self.max_dropout_ratio # 0~0.875
+ drop_idx = np.where(np.random.random((pc.shape[0])) <= dropout_ratio)[0]
+ if len(drop_idx) > 0:
+ pc[drop_idx] = pc[0] # set to the first point
+
+ return torch.from_numpy(pc).float()
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/__init__.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/__init__.py
new file mode 100644
index 0000000..f3ca798
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/__init__.py
@@ -0,0 +1,4 @@
+from pointnet2.models.pointnet2_msg_cls import PointNet2ClassificationMSG
+from pointnet2.models.pointnet2_msg_sem import PointNet2SemSegMSG
+from pointnet2.models.pointnet2_ssg_cls import PointNet2ClassificationSSG
+from pointnet2.models.pointnet2_ssg_sem import PointNet2SemSegSSG
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/pointnet2_msg_cls.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/pointnet2_msg_cls.py
new file mode 100644
index 0000000..892c0a1
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/pointnet2_msg_cls.py
@@ -0,0 +1,44 @@
+import pytorch_lightning as pl
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from pointnet2_ops.pointnet2_modules import PointnetSAModule, PointnetSAModuleMSG
+
+from pointnet2.models.pointnet2_ssg_cls import PointNet2ClassificationSSG
+
+
+class PointNet2ClassificationMSG(PointNet2ClassificationSSG):
+ def _build_model(self):
+ super()._build_model()
+
+ self.SA_modules = nn.ModuleList()
+ self.SA_modules.append(
+ PointnetSAModuleMSG(
+ npoint=512,
+ radii=[0.1, 0.2, 0.4],
+ nsamples=[16, 32, 128],
+ mlps=[[3, 32, 32, 64], [3, 64, 64, 128], [3, 64, 96, 128]],
+ use_xyz=self.hparams["model.use_xyz"],
+ )
+ )
+
+ input_channels = 64 + 128 + 128
+ self.SA_modules.append(
+ PointnetSAModuleMSG(
+ npoint=128,
+ radii=[0.2, 0.4, 0.8],
+ nsamples=[32, 64, 128],
+ mlps=[
+ [input_channels, 64, 64, 128],
+ [input_channels, 128, 128, 256],
+ [input_channels, 128, 128, 256],
+ ],
+ use_xyz=self.hparams["model.use_xyz"],
+ )
+ )
+ self.SA_modules.append(
+ PointnetSAModule(
+ mlp=[128 + 256 + 256, 256, 512, 1024],
+ use_xyz=self.hparams["model.use_xyz"],
+ )
+ )
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/pointnet2_msg_sem.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/pointnet2_msg_sem.py
new file mode 100644
index 0000000..f1b02a7
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/pointnet2_msg_sem.py
@@ -0,0 +1,74 @@
+from collections import namedtuple
+
+import pytorch_lightning as pl
+import torch
+import torch.nn as nn
+from pointnet2_ops.pointnet2_modules import PointnetFPModule, PointnetSAModuleMSG
+
+from pointnet2.models.pointnet2_ssg_sem import PointNet2SemSegSSG
+
+
+class PointNet2SemSegMSG(PointNet2SemSegSSG):
+ def _build_model(self):
+ self.SA_modules = nn.ModuleList()
+ c_in = 6
+ self.SA_modules.append(
+ PointnetSAModuleMSG(
+ npoint=1024,
+ radii=[0.05, 0.1],
+ nsamples=[16, 32],
+ mlps=[[c_in, 16, 16, 32], [c_in, 32, 32, 64]],
+ use_xyz=self.hparams["model.use_xyz"],
+ )
+ )
+ c_out_0 = 32 + 64
+
+ c_in = c_out_0
+ self.SA_modules.append(
+ PointnetSAModuleMSG(
+ npoint=256,
+ radii=[0.1, 0.2],
+ nsamples=[16, 32],
+ mlps=[[c_in, 64, 64, 128], [c_in, 64, 96, 128]],
+ use_xyz=self.hparams["model.use_xyz"],
+ )
+ )
+ c_out_1 = 128 + 128
+
+ c_in = c_out_1
+ self.SA_modules.append(
+ PointnetSAModuleMSG(
+ npoint=64,
+ radii=[0.2, 0.4],
+ nsamples=[16, 32],
+ mlps=[[c_in, 128, 196, 256], [c_in, 128, 196, 256]],
+ use_xyz=self.hparams["model.use_xyz"],
+ )
+ )
+ c_out_2 = 256 + 256
+
+ c_in = c_out_2
+ self.SA_modules.append(
+ PointnetSAModuleMSG(
+ npoint=16,
+ radii=[0.4, 0.8],
+ nsamples=[16, 32],
+ mlps=[[c_in, 256, 256, 512], [c_in, 256, 384, 512]],
+ use_xyz=self.hparams["model.use_xyz"],
+ )
+ )
+ c_out_3 = 512 + 512
+
+ self.FP_modules = nn.ModuleList()
+ self.FP_modules.append(PointnetFPModule(mlp=[256 + 6, 128, 128]))
+ self.FP_modules.append(PointnetFPModule(mlp=[512 + c_out_0, 256, 256]))
+ self.FP_modules.append(PointnetFPModule(mlp=[512 + c_out_1, 512, 512]))
+ self.FP_modules.append(PointnetFPModule(mlp=[c_out_3 + c_out_2, 512, 512]))
+
+ self.fc_lyaer = nn.Sequential(
+ nn.Conv1d(128, 128, kernel_size=1, bias=False),
+ nn.BatchNorm1d(128),
+ nn.ReLU(True),
+ nn.Dropout(0.5),
+ nn.Conv1d(128, 13, kernel_size=1),
+ )
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/pointnet2_ssg_cls.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/pointnet2_ssg_cls.py
new file mode 100644
index 0000000..ba89365
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/pointnet2_ssg_cls.py
@@ -0,0 +1,230 @@
+import pytorch_lightning as pl
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.optim.lr_scheduler as lr_sched
+from pointnet2_ops.pointnet2_modules import PointnetFPModule, PointnetSAModule
+from torch.utils.data import DataLoader, DistributedSampler
+from torchvision import transforms
+
+import pointnet2.data.data_utils as d_utils
+from pointnet2.data.ModelNet40Loader import ModelNet40Cls
+
+
+def set_bn_momentum_default(bn_momentum):
+ def fn(m):
+ if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
+ m.momentum = bn_momentum
+
+ return fn
+
+
+class BNMomentumScheduler(lr_sched.LambdaLR):
+ def __init__(self, model, bn_lambda, last_epoch=-1, setter=set_bn_momentum_default):
+ if not isinstance(model, nn.Module):
+ raise RuntimeError(
+                "Class '{}' is not a PyTorch nn Module".format(type(model).__name__)
+ )
+
+ self.model = model
+ self.setter = setter
+ self.lmbd = bn_lambda
+
+ self.step(last_epoch + 1)
+ self.last_epoch = last_epoch
+
+ def step(self, epoch=None):
+ if epoch is None:
+ epoch = self.last_epoch + 1
+
+ self.last_epoch = epoch
+ self.model.apply(self.setter(self.lmbd(epoch)))
+
+ def state_dict(self):
+ return dict(last_epoch=self.last_epoch)
+
+ def load_state_dict(self, state):
+ self.last_epoch = state["last_epoch"]
+ self.step(self.last_epoch)
+
+
+lr_clip = 1e-5
+bnm_clip = 1e-2
+
+
+class PointNet2ClassificationSSG(pl.LightningModule):
+ def __init__(self, hparams):
+ super().__init__()
+
+ self.hparams = hparams
+
+ self._build_model()
+
+ def _build_model(self):
+ self.SA_modules = nn.ModuleList()
+ self.SA_modules.append(
+ PointnetSAModule(
+ npoint=512,
+ radius=0.2,
+ nsample=64,
+ mlp=[3, 64, 64, 128],
+ use_xyz=self.hparams["model.use_xyz"],
+ )
+ )
+ self.SA_modules.append(
+ PointnetSAModule(
+ npoint=128,
+ radius=0.4,
+ nsample=64,
+ mlp=[128, 128, 128, 256],
+ use_xyz=self.hparams["model.use_xyz"],
+ )
+ )
+ self.SA_modules.append(
+ PointnetSAModule(
+ mlp=[256, 256, 512, 1024], use_xyz=self.hparams["model.use_xyz"]
+ )
+ )
+
+ self.fc_layer = nn.Sequential(
+ nn.Linear(1024, 512, bias=False),
+ nn.BatchNorm1d(512),
+ nn.ReLU(True),
+ nn.Linear(512, 256, bias=False),
+ nn.BatchNorm1d(256),
+ nn.ReLU(True),
+ nn.Dropout(0.5),
+ nn.Linear(256, 40),
+ )
+
+ def _break_up_pc(self, pc):
+ xyz = pc[..., 0:3].contiguous()
+ features = pc[..., 3:].transpose(1, 2).contiguous() if pc.size(-1) > 3 else None
+
+ return xyz, features
+
+ def forward(self, pointcloud):
+ r"""
+ Forward pass of the network
+
+ Parameters
+ ----------
+ pointcloud: Variable(torch.cuda.FloatTensor)
+ (B, N, 3 + input_channels) tensor
+            Point cloud to run predictions on
+            Each point in the point-cloud MUST
+            be formatted as (x, y, z, features...)
+ """
+ xyz, features = self._break_up_pc(pointcloud)
+
+ for module in self.SA_modules:
+ xyz, features = module(xyz, features)
+
+ return self.fc_layer(features.squeeze(-1))
+
+ def training_step(self, batch, batch_idx):
+ pc, labels = batch
+
+ logits = self.forward(pc)
+ loss = F.cross_entropy(logits, labels)
+ with torch.no_grad():
+ acc = (torch.argmax(logits, dim=1) == labels).float().mean()
+
+ log = dict(train_loss=loss, train_acc=acc)
+
+ return dict(loss=loss, log=log, progress_bar=dict(train_acc=acc))
+
+ def validation_step(self, batch, batch_idx):
+ pc, labels = batch
+
+ logits = self.forward(pc)
+ loss = F.cross_entropy(logits, labels)
+ acc = (torch.argmax(logits, dim=1) == labels).float().mean()
+
+ return dict(val_loss=loss, val_acc=acc)
+
+ def validation_end(self, outputs):
+ reduced_outputs = {}
+ for k in outputs[0]:
+ for o in outputs:
+ reduced_outputs[k] = reduced_outputs.get(k, []) + [o[k]]
+
+ for k in reduced_outputs:
+ reduced_outputs[k] = torch.stack(reduced_outputs[k]).mean()
+
+ reduced_outputs.update(
+ dict(log=reduced_outputs.copy(), progress_bar=reduced_outputs.copy())
+ )
+
+ return reduced_outputs
+
+ def configure_optimizers(self):
+ lr_lbmd = lambda _: max(
+ self.hparams["optimizer.lr_decay"]
+ ** (
+ int(
+ self.global_step
+ * self.hparams["batch_size"]
+ / self.hparams["optimizer.decay_step"]
+ )
+ ),
+ lr_clip / self.hparams["optimizer.lr"],
+ )
+ bn_lbmd = lambda _: max(
+ self.hparams["optimizer.bn_momentum"]
+ * self.hparams["optimizer.bnm_decay"]
+ ** (
+ int(
+ self.global_step
+ * self.hparams["batch_size"]
+ / self.hparams["optimizer.decay_step"]
+ )
+ ),
+ bnm_clip,
+ )
+
+ optimizer = torch.optim.Adam(
+ self.parameters(),
+ lr=self.hparams["optimizer.lr"],
+ weight_decay=self.hparams["optimizer.weight_decay"],
+ )
+ lr_scheduler = lr_sched.LambdaLR(optimizer, lr_lambda=lr_lbmd)
+ bnm_scheduler = BNMomentumScheduler(self, bn_lambda=bn_lbmd)
+
+ return [optimizer], [lr_scheduler, bnm_scheduler]
+
+ def prepare_data(self):
+ train_transforms = transforms.Compose(
+ [
+ d_utils.PointcloudToTensor(),
+ d_utils.PointcloudScale(),
+ d_utils.PointcloudRotate(),
+ d_utils.PointcloudRotatePerturbation(),
+ d_utils.PointcloudTranslate(),
+ d_utils.PointcloudJitter(),
+ d_utils.PointcloudRandomInputDropout(),
+ ]
+ )
+
+ self.train_dset = ModelNet40Cls(
+ self.hparams["num_points"], transforms=train_transforms, train=True
+ )
+ self.val_dset = ModelNet40Cls(
+ self.hparams["num_points"], transforms=None, train=False
+ )
+
+ def _build_dataloader(self, dset, mode):
+ return DataLoader(
+ dset,
+ batch_size=self.hparams["batch_size"],
+ shuffle=mode == "train",
+ num_workers=4,
+ pin_memory=True,
+ drop_last=mode == "train",
+ )
+
+ def train_dataloader(self):
+ return self._build_dataloader(self.train_dset, mode="train")
+
+ def val_dataloader(self):
+ return self._build_dataloader(self.val_dset, mode="val")
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/pointnet2_ssg_sem.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/pointnet2_ssg_sem.py
new file mode 100644
index 0000000..602be33
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/models/pointnet2_ssg_sem.py
@@ -0,0 +1,94 @@
+import pytorch_lightning as pl
+import torch
+import torch.nn as nn
+from pointnet2_ops.pointnet2_modules import PointnetFPModule, PointnetSAModule
+from torch.utils.data import DataLoader
+
+from pointnet2.data import Indoor3DSemSeg
+from pointnet2.models.pointnet2_ssg_cls import PointNet2ClassificationSSG
+
+
+class PointNet2SemSegSSG(PointNet2ClassificationSSG):
+ def _build_model(self):
+ self.SA_modules = nn.ModuleList()
+ self.SA_modules.append(
+ PointnetSAModule(
+ npoint=1024,
+ radius=0.1,
+ nsample=32,
+ mlp=[6, 32, 32, 64],
+ use_xyz=self.hparams["model.use_xyz"],
+ )
+ )
+ self.SA_modules.append(
+ PointnetSAModule(
+ npoint=256,
+ radius=0.2,
+ nsample=32,
+ mlp=[64, 64, 64, 128],
+ use_xyz=self.hparams["model.use_xyz"],
+ )
+ )
+ self.SA_modules.append(
+ PointnetSAModule(
+ npoint=64,
+ radius=0.4,
+ nsample=32,
+ mlp=[128, 128, 128, 256],
+ use_xyz=self.hparams["model.use_xyz"],
+ )
+ )
+ self.SA_modules.append(
+ PointnetSAModule(
+ npoint=16,
+ radius=0.8,
+ nsample=32,
+ mlp=[256, 256, 256, 512],
+ use_xyz=self.hparams["model.use_xyz"],
+ )
+ )
+
+ self.FP_modules = nn.ModuleList()
+ self.FP_modules.append(PointnetFPModule(mlp=[128 + 6, 128, 128, 128]))
+ self.FP_modules.append(PointnetFPModule(mlp=[256 + 64, 256, 128]))
+ self.FP_modules.append(PointnetFPModule(mlp=[256 + 128, 256, 256]))
+ self.FP_modules.append(PointnetFPModule(mlp=[512 + 256, 256, 256]))
+
+ self.fc_lyaer = nn.Sequential(
+ nn.Conv1d(128, 128, kernel_size=1, bias=False),
+ nn.BatchNorm1d(128),
+ nn.ReLU(True),
+ nn.Dropout(0.5),
+ nn.Conv1d(128, 13, kernel_size=1),
+ )
+
+ def forward(self, pointcloud):
+ r"""
+ Forward pass of the network
+
+ Parameters
+ ----------
+ pointcloud: Variable(torch.cuda.FloatTensor)
+ (B, N, 3 + input_channels) tensor
+            Point cloud to run predictions on
+            Each point in the point-cloud MUST
+            be formatted as (x, y, z, features...)
+ """
+ xyz, features = self._break_up_pc(pointcloud)
+
+ l_xyz, l_features = [xyz], [features]
+ for i in range(len(self.SA_modules)):
+ li_xyz, li_features = self.SA_modules[i](l_xyz[i], l_features[i])
+ l_xyz.append(li_xyz)
+ l_features.append(li_features)
+
+ for i in range(-1, -(len(self.FP_modules) + 1), -1):
+ l_features[i - 1] = self.FP_modules[i](
+ l_xyz[i - 1], l_xyz[i], l_features[i - 1], l_features[i]
+ )
+
+ return self.fc_lyaer(l_features[0])
+
+ def prepare_data(self):
+ self.train_dset = Indoor3DSemSeg(self.hparams["num_points"], train=True)
+ self.val_dset = Indoor3DSemSeg(self.hparams["num_points"], train=False)
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/train.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/train.py
new file mode 100644
index 0000000..c980c1f
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/train.py
@@ -0,0 +1,55 @@
+import os
+
+import hydra
+import omegaconf
+import pytorch_lightning as pl
+import torch
+from pytorch_lightning.loggers import TensorBoardLogger
+
+torch.backends.cudnn.enabled = True
+torch.backends.cudnn.benchmark = True
+
+
+def hydra_params_to_dotdict(hparams):
+ def _to_dot_dict(cfg):
+ res = {}
+ for k, v in cfg.items():
+ if isinstance(v, omegaconf.DictConfig):
+ res.update(
+ {k + "." + subk: subv for subk, subv in _to_dot_dict(v).items()}
+ )
+ elif isinstance(v, (str, int, float, bool)):
+ res[k] = v
+
+ return res
+
+ return _to_dot_dict(hparams)
+
+
+@hydra.main("config/config.yaml")
+def main(cfg):
+ model = hydra.utils.instantiate(cfg.task_model, hydra_params_to_dotdict(cfg))
+
+ early_stop_callback = pl.callbacks.EarlyStopping(patience=5)
+ checkpoint_callback = pl.callbacks.ModelCheckpoint(
+ monitor="val_acc",
+ mode="max",
+ save_top_k=2,
+ filepath=os.path.join(
+ cfg.task_model.name, "{epoch}-{val_loss:.2f}-{val_acc:.3f}"
+ ),
+ verbose=True,
+ )
+ trainer = pl.Trainer(
+ gpus=list(cfg.gpus),
+ max_epochs=cfg.epochs,
+ early_stop_callback=early_stop_callback,
+ checkpoint_callback=checkpoint_callback,
+ distributed_backend=cfg.distrib_backend,
+ )
+
+ trainer.fit(model)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2/utils/.gitignore b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/utils/.gitignore
new file mode 100644
index 0000000..25bd00c
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2/utils/.gitignore
@@ -0,0 +1,2 @@
+build
+_ext
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/MANIFEST.in b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/MANIFEST.in
new file mode 100644
index 0000000..a4eb5de
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/MANIFEST.in
@@ -0,0 +1 @@
+graft pointnet2_ops/_ext-src
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/__init__.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/__init__.py
new file mode 100644
index 0000000..5fd361f
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/__init__.py
@@ -0,0 +1,3 @@
+import pointnet2_ops.pointnet2_modules
+import pointnet2_ops.pointnet2_utils
+from pointnet2_ops._version import __version__
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/ball_query.h b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/ball_query.h
new file mode 100644
index 0000000..1bbc638
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/ball_query.h
@@ -0,0 +1,5 @@
+#pragma once
+#include <torch/extension.h>
+
+at::Tensor ball_query(at::Tensor new_xyz, at::Tensor xyz, const float radius,
+ const int nsample);
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/cuda_utils.h b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/cuda_utils.h
new file mode 100644
index 0000000..0fd5b6e
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/cuda_utils.h
@@ -0,0 +1,41 @@
+#ifndef _CUDA_UTILS_H
+#define _CUDA_UTILS_H
+
+#include <ATen/ATen.h>
+#include <ATen/cuda/CUDAContext.h>
+#include <cmath>
+
+#include <cuda.h>
+#include <cuda_runtime.h>
+
+#include <vector>
+
+#define TOTAL_THREADS 512
+
+inline int opt_n_threads(int work_size) {
+  const int pow_2 = std::log(static_cast<double>(work_size)) / std::log(2.0);
+
+ return max(min(1 << pow_2, TOTAL_THREADS), 1);
+}
+
+inline dim3 opt_block_config(int x, int y) {
+ const int x_threads = opt_n_threads(x);
+ const int y_threads =
+ max(min(opt_n_threads(y), TOTAL_THREADS / x_threads), 1);
+ dim3 block_config(x_threads, y_threads, 1);
+
+ return block_config;
+}
+
+#define CUDA_CHECK_ERRORS() \
+ do { \
+ cudaError_t err = cudaGetLastError(); \
+ if (cudaSuccess != err) { \
+ fprintf(stderr, "CUDA kernel failed : %s\n%s at L:%d in %s\n", \
+ cudaGetErrorString(err), __PRETTY_FUNCTION__, __LINE__, \
+ __FILE__); \
+ exit(-1); \
+ } \
+ } while (0)
+
+#endif
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/group_points.h b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/group_points.h
new file mode 100644
index 0000000..ad20cda
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/group_points.h
@@ -0,0 +1,5 @@
+#pragma once
+#include <torch/extension.h>
+
+at::Tensor group_points(at::Tensor points, at::Tensor idx);
+at::Tensor group_points_grad(at::Tensor grad_out, at::Tensor idx, const int n);
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/interpolate.h b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/interpolate.h
new file mode 100644
index 0000000..26b3464
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/interpolate.h
@@ -0,0 +1,10 @@
+#pragma once
+
+#include <torch/extension.h>
+#include <vector>
+
+std::vector<at::Tensor> three_nn(at::Tensor unknowns, at::Tensor knows);
+at::Tensor three_interpolate(at::Tensor points, at::Tensor idx,
+ at::Tensor weight);
+at::Tensor three_interpolate_grad(at::Tensor grad_out, at::Tensor idx,
+ at::Tensor weight, const int m);
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/sampling.h b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/sampling.h
new file mode 100644
index 0000000..d795271
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/sampling.h
@@ -0,0 +1,6 @@
+#pragma once
+#include <torch/extension.h>
+
+at::Tensor gather_points(at::Tensor points, at::Tensor idx);
+at::Tensor gather_points_grad(at::Tensor grad_out, at::Tensor idx, const int n);
+at::Tensor furthest_point_sampling(at::Tensor points, const int nsamples);
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/utils.h b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/utils.h
new file mode 100644
index 0000000..5f080ed
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/utils.h
@@ -0,0 +1,25 @@
+#pragma once
+#include <ATen/cuda/CUDAContext.h>
+#include <torch/extension.h>
+
+#define CHECK_CUDA(x) \
+ do { \
+ AT_ASSERT(x.is_cuda(), #x " must be a CUDA tensor"); \
+ } while (0)
+
+#define CHECK_CONTIGUOUS(x) \
+ do { \
+ AT_ASSERT(x.is_contiguous(), #x " must be a contiguous tensor"); \
+ } while (0)
+
+#define CHECK_IS_INT(x) \
+ do { \
+ AT_ASSERT(x.scalar_type() == at::ScalarType::Int, \
+ #x " must be an int tensor"); \
+ } while (0)
+
+#define CHECK_IS_FLOAT(x) \
+ do { \
+ AT_ASSERT(x.scalar_type() == at::ScalarType::Float, \
+ #x " must be a float tensor"); \
+ } while (0)
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query.cpp b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query.cpp
new file mode 100644
index 0000000..b1797c1
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query.cpp
@@ -0,0 +1,32 @@
+#include "ball_query.h"
+#include "utils.h"
+
+void query_ball_point_kernel_wrapper(int b, int n, int m, float radius,
+ int nsample, const float *new_xyz,
+ const float *xyz, int *idx);
+
+at::Tensor ball_query(at::Tensor new_xyz, at::Tensor xyz, const float radius,
+ const int nsample) {
+ CHECK_CONTIGUOUS(new_xyz);
+ CHECK_CONTIGUOUS(xyz);
+ CHECK_IS_FLOAT(new_xyz);
+ CHECK_IS_FLOAT(xyz);
+
+ if (new_xyz.is_cuda()) {
+ CHECK_CUDA(xyz);
+ }
+
+ at::Tensor idx =
+ torch::zeros({new_xyz.size(0), new_xyz.size(1), nsample},
+ at::device(new_xyz.device()).dtype(at::ScalarType::Int));
+
+ if (new_xyz.is_cuda()) {
+ query_ball_point_kernel_wrapper(xyz.size(0), xyz.size(1), new_xyz.size(1),
+                                    radius, nsample, new_xyz.data_ptr<float>(),
+                                    xyz.data_ptr<float>(), idx.data_ptr<int>());
+ } else {
+ AT_ASSERT(false, "CPU not supported");
+ }
+
+ return idx;
+}
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query_gpu.cu b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query_gpu.cu
new file mode 100644
index 0000000..559aef9
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query_gpu.cu
@@ -0,0 +1,54 @@
+#include <math.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "cuda_utils.h"
+
+// input: new_xyz(b, m, 3) xyz(b, n, 3)
+// output: idx(b, m, nsample)
+__global__ void query_ball_point_kernel(int b, int n, int m, float radius,
+ int nsample,
+ const float *__restrict__ new_xyz,
+ const float *__restrict__ xyz,
+ int *__restrict__ idx) {
+ int batch_index = blockIdx.x;
+ xyz += batch_index * n * 3;
+ new_xyz += batch_index * m * 3;
+ idx += m * nsample * batch_index;
+
+ int index = threadIdx.x;
+ int stride = blockDim.x;
+
+ float radius2 = radius * radius;
+ for (int j = index; j < m; j += stride) {
+ float new_x = new_xyz[j * 3 + 0];
+ float new_y = new_xyz[j * 3 + 1];
+ float new_z = new_xyz[j * 3 + 2];
+ for (int k = 0, cnt = 0; k < n && cnt < nsample; ++k) {
+ float x = xyz[k * 3 + 0];
+ float y = xyz[k * 3 + 1];
+ float z = xyz[k * 3 + 2];
+ float d2 = (new_x - x) * (new_x - x) + (new_y - y) * (new_y - y) +
+ (new_z - z) * (new_z - z);
+ if (d2 < radius2) {
+ if (cnt == 0) {
+ for (int l = 0; l < nsample; ++l) {
+ idx[j * nsample + l] = k;
+ }
+ }
+ idx[j * nsample + cnt] = k;
+ ++cnt;
+ }
+ }
+ }
+}
+
+void query_ball_point_kernel_wrapper(int b, int n, int m, float radius,
+ int nsample, const float *new_xyz,
+ const float *xyz, int *idx) {
+ cudaStream_t stream = at::cuda::getCurrentCUDAStream();
+  query_ball_point_kernel<<<b, opt_n_threads(m), 0, stream>>>(
+ b, n, m, radius, nsample, new_xyz, xyz, idx);
+
+ CUDA_CHECK_ERRORS();
+}
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/bindings.cpp b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/bindings.cpp
new file mode 100644
index 0000000..d1916ce
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/bindings.cpp
@@ -0,0 +1,19 @@
+#include "ball_query.h"
+#include "group_points.h"
+#include "interpolate.h"
+#include "sampling.h"
+
+PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
+ m.def("gather_points", &gather_points);
+ m.def("gather_points_grad", &gather_points_grad);
+ m.def("furthest_point_sampling", &furthest_point_sampling);
+
+ m.def("three_nn", &three_nn);
+ m.def("three_interpolate", &three_interpolate);
+ m.def("three_interpolate_grad", &three_interpolate_grad);
+
+ m.def("ball_query", &ball_query);
+
+ m.def("group_points", &group_points);
+ m.def("group_points_grad", &group_points_grad);
+}
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/group_points.cpp b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/group_points.cpp
new file mode 100644
index 0000000..285a4bd
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/group_points.cpp
@@ -0,0 +1,62 @@
+#include "group_points.h"
+#include "utils.h"
+
+void group_points_kernel_wrapper(int b, int c, int n, int npoints, int nsample,
+ const float *points, const int *idx,
+ float *out);
+
+void group_points_grad_kernel_wrapper(int b, int c, int n, int npoints,
+ int nsample, const float *grad_out,
+ const int *idx, float *grad_points);
+
+at::Tensor group_points(at::Tensor points, at::Tensor idx) {
+ CHECK_CONTIGUOUS(points);
+ CHECK_CONTIGUOUS(idx);
+ CHECK_IS_FLOAT(points);
+ CHECK_IS_INT(idx);
+
+ if (points.is_cuda()) {
+ CHECK_CUDA(idx);
+ }
+
+ at::Tensor output =
+ torch::zeros({points.size(0), points.size(1), idx.size(1), idx.size(2)},
+ at::device(points.device()).dtype(at::ScalarType::Float));
+
+ if (points.is_cuda()) {
+ group_points_kernel_wrapper(points.size(0), points.size(1), points.size(2),
+ idx.size(1), idx.size(2),
+                                points.data_ptr<float>(), idx.data_ptr<int>(),
+                                output.data_ptr<float>());
+ } else {
+ AT_ASSERT(false, "CPU not supported");
+ }
+
+ return output;
+}
+
+at::Tensor group_points_grad(at::Tensor grad_out, at::Tensor idx, const int n) {
+ CHECK_CONTIGUOUS(grad_out);
+ CHECK_CONTIGUOUS(idx);
+ CHECK_IS_FLOAT(grad_out);
+ CHECK_IS_INT(idx);
+
+ if (grad_out.is_cuda()) {
+ CHECK_CUDA(idx);
+ }
+
+ at::Tensor output =
+ torch::zeros({grad_out.size(0), grad_out.size(1), n},
+ at::device(grad_out.device()).dtype(at::ScalarType::Float));
+
+ if (grad_out.is_cuda()) {
+ group_points_grad_kernel_wrapper(
+ grad_out.size(0), grad_out.size(1), n, idx.size(1), idx.size(2),
+        grad_out.data_ptr<float>(), idx.data_ptr<int>(),
+        output.data_ptr<float>());
+ } else {
+ AT_ASSERT(false, "CPU not supported");
+ }
+
+ return output;
+}
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/group_points_gpu.cu b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/group_points_gpu.cu
new file mode 100644
index 0000000..57c2b1b
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/group_points_gpu.cu
@@ -0,0 +1,75 @@
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "cuda_utils.h"
+
+// input: points(b, c, n) idx(b, npoints, nsample)
+// output: out(b, c, npoints, nsample)
+__global__ void group_points_kernel(int b, int c, int n, int npoints,
+ int nsample,
+ const float *__restrict__ points,
+ const int *__restrict__ idx,
+ float *__restrict__ out) {
+ int batch_index = blockIdx.x;
+ points += batch_index * n * c;
+ idx += batch_index * npoints * nsample;
+ out += batch_index * npoints * nsample * c;
+
+ const int index = threadIdx.y * blockDim.x + threadIdx.x;
+ const int stride = blockDim.y * blockDim.x;
+ for (int i = index; i < c * npoints; i += stride) {
+ const int l = i / npoints;
+ const int j = i % npoints;
+ for (int k = 0; k < nsample; ++k) {
+ int ii = idx[j * nsample + k];
+ out[(l * npoints + j) * nsample + k] = points[l * n + ii];
+ }
+ }
+}
+
+void group_points_kernel_wrapper(int b, int c, int n, int npoints, int nsample,
+ const float *points, const int *idx,
+ float *out) {
+ cudaStream_t stream = at::cuda::getCurrentCUDAStream();
+
+  group_points_kernel<<<b, opt_block_config(npoints, c), 0, stream>>>(
+ b, c, n, npoints, nsample, points, idx, out);
+
+ CUDA_CHECK_ERRORS();
+}
+
+// input: grad_out(b, c, npoints, nsample), idx(b, npoints, nsample)
+// output: grad_points(b, c, n)
+__global__ void group_points_grad_kernel(int b, int c, int n, int npoints,
+ int nsample,
+ const float *__restrict__ grad_out,
+ const int *__restrict__ idx,
+ float *__restrict__ grad_points) {
+ int batch_index = blockIdx.x;
+ grad_out += batch_index * npoints * nsample * c;
+ idx += batch_index * npoints * nsample;
+ grad_points += batch_index * n * c;
+
+ const int index = threadIdx.y * blockDim.x + threadIdx.x;
+ const int stride = blockDim.y * blockDim.x;
+ for (int i = index; i < c * npoints; i += stride) {
+ const int l = i / npoints;
+ const int j = i % npoints;
+ for (int k = 0; k < nsample; ++k) {
+ int ii = idx[j * nsample + k];
+ atomicAdd(grad_points + l * n + ii,
+ grad_out[(l * npoints + j) * nsample + k]);
+ }
+ }
+}
+
+void group_points_grad_kernel_wrapper(int b, int c, int n, int npoints,
+ int nsample, const float *grad_out,
+ const int *idx, float *grad_points) {
+ cudaStream_t stream = at::cuda::getCurrentCUDAStream();
+
+  group_points_grad_kernel<<<b, opt_block_config(npoints, c), 0, stream>>>(
+ b, c, n, npoints, nsample, grad_out, idx, grad_points);
+
+ CUDA_CHECK_ERRORS();
+}
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/interpolate.cpp b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/interpolate.cpp
new file mode 100644
index 0000000..cdee31c
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/interpolate.cpp
@@ -0,0 +1,99 @@
+#include "interpolate.h"
+#include "utils.h"
+
+void three_nn_kernel_wrapper(int b, int n, int m, const float *unknown,
+ const float *known, float *dist2, int *idx);
+void three_interpolate_kernel_wrapper(int b, int c, int m, int n,
+ const float *points, const int *idx,
+ const float *weight, float *out);
+void three_interpolate_grad_kernel_wrapper(int b, int c, int n, int m,
+ const float *grad_out,
+ const int *idx, const float *weight,
+ float *grad_points);
+
+std::vector<at::Tensor> three_nn(at::Tensor unknowns, at::Tensor knows) {
+ CHECK_CONTIGUOUS(unknowns);
+ CHECK_CONTIGUOUS(knows);
+ CHECK_IS_FLOAT(unknowns);
+ CHECK_IS_FLOAT(knows);
+
+ if (unknowns.is_cuda()) {
+ CHECK_CUDA(knows);
+ }
+
+ at::Tensor idx =
+ torch::zeros({unknowns.size(0), unknowns.size(1), 3},
+ at::device(unknowns.device()).dtype(at::ScalarType::Int));
+ at::Tensor dist2 =
+ torch::zeros({unknowns.size(0), unknowns.size(1), 3},
+ at::device(unknowns.device()).dtype(at::ScalarType::Float));
+
+ if (unknowns.is_cuda()) {
+ three_nn_kernel_wrapper(unknowns.size(0), unknowns.size(1), knows.size(1),
+                            unknowns.data_ptr<float>(), knows.data_ptr<float>(),
+                            dist2.data_ptr<float>(), idx.data_ptr<int>());
+ } else {
+ AT_ASSERT(false, "CPU not supported");
+ }
+
+ return {dist2, idx};
+}
+
+at::Tensor three_interpolate(at::Tensor points, at::Tensor idx,
+ at::Tensor weight) {
+ CHECK_CONTIGUOUS(points);
+ CHECK_CONTIGUOUS(idx);
+ CHECK_CONTIGUOUS(weight);
+ CHECK_IS_FLOAT(points);
+ CHECK_IS_INT(idx);
+ CHECK_IS_FLOAT(weight);
+
+ if (points.is_cuda()) {
+ CHECK_CUDA(idx);
+ CHECK_CUDA(weight);
+ }
+
+ at::Tensor output =
+ torch::zeros({points.size(0), points.size(1), idx.size(1)},
+ at::device(points.device()).dtype(at::ScalarType::Float));
+
+ if (points.is_cuda()) {
+ three_interpolate_kernel_wrapper(
+ points.size(0), points.size(1), points.size(2), idx.size(1),
+        points.data_ptr<float>(), idx.data_ptr<int>(), weight.data_ptr<float>(),
+        output.data_ptr<float>());
+ } else {
+ AT_ASSERT(false, "CPU not supported");
+ }
+
+ return output;
+}
+at::Tensor three_interpolate_grad(at::Tensor grad_out, at::Tensor idx,
+ at::Tensor weight, const int m) {
+ CHECK_CONTIGUOUS(grad_out);
+ CHECK_CONTIGUOUS(idx);
+ CHECK_CONTIGUOUS(weight);
+ CHECK_IS_FLOAT(grad_out);
+ CHECK_IS_INT(idx);
+ CHECK_IS_FLOAT(weight);
+
+ if (grad_out.is_cuda()) {
+ CHECK_CUDA(idx);
+ CHECK_CUDA(weight);
+ }
+
+ at::Tensor output =
+ torch::zeros({grad_out.size(0), grad_out.size(1), m},
+ at::device(grad_out.device()).dtype(at::ScalarType::Float));
+
+ if (grad_out.is_cuda()) {
+ three_interpolate_grad_kernel_wrapper(
+ grad_out.size(0), grad_out.size(1), grad_out.size(2), m,
+        grad_out.data_ptr<float>(), idx.data_ptr<int>(),
+        weight.data_ptr<float>(), output.data_ptr<float>());
+ } else {
+ AT_ASSERT(false, "CPU not supported");
+ }
+
+ return output;
+}
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/interpolate_gpu.cu b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/interpolate_gpu.cu
new file mode 100644
index 0000000..81c5548
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/interpolate_gpu.cu
@@ -0,0 +1,154 @@
+#include <math.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "cuda_utils.h"
+
+// input: unknown(b, n, 3) known(b, m, 3)
+// output: dist2(b, n, 3), idx(b, n, 3)
+__global__ void three_nn_kernel(int b, int n, int m,
+ const float *__restrict__ unknown,
+ const float *__restrict__ known,
+ float *__restrict__ dist2,
+ int *__restrict__ idx) {
+ int batch_index = blockIdx.x;
+ unknown += batch_index * n * 3;
+ known += batch_index * m * 3;
+ dist2 += batch_index * n * 3;
+ idx += batch_index * n * 3;
+
+ int index = threadIdx.x;
+ int stride = blockDim.x;
+ for (int j = index; j < n; j += stride) {
+ float ux = unknown[j * 3 + 0];
+ float uy = unknown[j * 3 + 1];
+ float uz = unknown[j * 3 + 2];
+
+ double best1 = 1e40, best2 = 1e40, best3 = 1e40;
+ int besti1 = 0, besti2 = 0, besti3 = 0;
+ for (int k = 0; k < m; ++k) {
+ float x = known[k * 3 + 0];
+ float y = known[k * 3 + 1];
+ float z = known[k * 3 + 2];
+ float d = (ux - x) * (ux - x) + (uy - y) * (uy - y) + (uz - z) * (uz - z);
+ if (d < best1) {
+ best3 = best2;
+ besti3 = besti2;
+ best2 = best1;
+ besti2 = besti1;
+ best1 = d;
+ besti1 = k;
+ } else if (d < best2) {
+ best3 = best2;
+ besti3 = besti2;
+ best2 = d;
+ besti2 = k;
+ } else if (d < best3) {
+ best3 = d;
+ besti3 = k;
+ }
+ }
+ dist2[j * 3 + 0] = best1;
+ dist2[j * 3 + 1] = best2;
+ dist2[j * 3 + 2] = best3;
+
+ idx[j * 3 + 0] = besti1;
+ idx[j * 3 + 1] = besti2;
+ idx[j * 3 + 2] = besti3;
+ }
+}
+
+void three_nn_kernel_wrapper(int b, int n, int m, const float *unknown,
+ const float *known, float *dist2, int *idx) {
+ cudaStream_t stream = at::cuda::getCurrentCUDAStream();
+  three_nn_kernel<<<b, opt_n_threads(n), 0, stream>>>(b, n, m, unknown, known,
+ dist2, idx);
+
+ CUDA_CHECK_ERRORS();
+}
+
+// input: points(b, c, m), idx(b, n, 3), weight(b, n, 3)
+// output: out(b, c, n)
+__global__ void three_interpolate_kernel(int b, int c, int m, int n,
+ const float *__restrict__ points,
+ const int *__restrict__ idx,
+ const float *__restrict__ weight,
+ float *__restrict__ out) {
+ int batch_index = blockIdx.x;
+ points += batch_index * m * c;
+
+ idx += batch_index * n * 3;
+ weight += batch_index * n * 3;
+
+ out += batch_index * n * c;
+
+ const int index = threadIdx.y * blockDim.x + threadIdx.x;
+ const int stride = blockDim.y * blockDim.x;
+ for (int i = index; i < c * n; i += stride) {
+ const int l = i / n;
+ const int j = i % n;
+ float w1 = weight[j * 3 + 0];
+ float w2 = weight[j * 3 + 1];
+ float w3 = weight[j * 3 + 2];
+
+ int i1 = idx[j * 3 + 0];
+ int i2 = idx[j * 3 + 1];
+ int i3 = idx[j * 3 + 2];
+
+ out[i] = points[l * m + i1] * w1 + points[l * m + i2] * w2 +
+ points[l * m + i3] * w3;
+ }
+}
+
+void three_interpolate_kernel_wrapper(int b, int c, int m, int n,
+ const float *points, const int *idx,
+ const float *weight, float *out) {
+ cudaStream_t stream = at::cuda::getCurrentCUDAStream();
+ three_interpolate_kernel<<<b, opt_block_config(n, c), 0, stream>>>(
+ b, c, m, n, points, idx, weight, out);
+
+ CUDA_CHECK_ERRORS();
+}
+
+// input: grad_out(b, c, n), idx(b, n, 3), weight(b, n, 3)
+// output: grad_points(b, c, m)
+
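+// Inverse of three_interpolate_kernel: each thread scatters its output
+// gradient back to the three source features it interpolated from, weighted
+// by the same w1..w3. atomicAdd is required because several interpolated
+// points can share a source index.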
+__global__ void three_interpolate_grad_kernel(
+ int b, int c, int n, int m, const float *__restrict__ grad_out,
+ const int *__restrict__ idx, const float *__restrict__ weight,
+ float *__restrict__ grad_points) {
+ int batch_index = blockIdx.x;
+ grad_out += batch_index * n * c;
+ idx += batch_index * n * 3;
+ weight += batch_index * n * 3;
+ grad_points += batch_index * m * c;
+
+ const int index = threadIdx.y * blockDim.x + threadIdx.x;
+ const int stride = blockDim.y * blockDim.x;
+ for (int i = index; i < c * n; i += stride) {
+ const int l = i / n;
+ const int j = i % n;
+ float w1 = weight[j * 3 + 0];
+ float w2 = weight[j * 3 + 1];
+ float w3 = weight[j * 3 + 2];
+
+ int i1 = idx[j * 3 + 0];
+ int i2 = idx[j * 3 + 1];
+ int i3 = idx[j * 3 + 2];
+
+ atomicAdd(grad_points + l * m + i1, grad_out[i] * w1);
+ atomicAdd(grad_points + l * m + i2, grad_out[i] * w2);
+ atomicAdd(grad_points + l * m + i3, grad_out[i] * w3);
+ }
+}
+
+void three_interpolate_grad_kernel_wrapper(int b, int c, int n, int m,
+ const float *grad_out,
+ const int *idx, const float *weight,
+ float *grad_points) {
+ cudaStream_t stream = at::cuda::getCurrentCUDAStream();
+ three_interpolate_grad_kernel<<<b, opt_block_config(n, c), 0, stream>>>(
+ b, c, n, m, grad_out, idx, weight, grad_points);
+
+ CUDA_CHECK_ERRORS();
+}
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/sampling.cpp b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/sampling.cpp
new file mode 100644
index 0000000..ddbdc11
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/sampling.cpp
@@ -0,0 +1,87 @@
+#include "sampling.h"
+#include "utils.h"
+
+void gather_points_kernel_wrapper(int b, int c, int n, int npoints,
+ const float *points, const int *idx,
+ float *out);
+void gather_points_grad_kernel_wrapper(int b, int c, int n, int npoints,
+ const float *grad_out, const int *idx,
+ float *grad_points);
+
+void furthest_point_sampling_kernel_wrapper(int b, int n, int m,
+ const float *dataset, float *temp,
+ int *idxs);
+
+at::Tensor gather_points(at::Tensor points, at::Tensor idx) {
+ CHECK_CONTIGUOUS(points);
+ CHECK_CONTIGUOUS(idx);
+ CHECK_IS_FLOAT(points);
+ CHECK_IS_INT(idx);
+
+ if (points.is_cuda()) {
+ CHECK_CUDA(idx);
+ }
+
+ at::Tensor output =
+ torch::zeros({points.size(0), points.size(1), idx.size(1)},
+ at::device(points.device()).dtype(at::ScalarType::Float));
+
+ if (points.is_cuda()) {
+ gather_points_kernel_wrapper(points.size(0), points.size(1), points.size(2),
+ idx.size(1), points.data_ptr(),
+ idx.data_ptr(), output.data_ptr());
+ } else {
+ AT_ASSERT(false, "CPU not supported");
+ }
+
+ return output;
+}
+
+at::Tensor gather_points_grad(at::Tensor grad_out, at::Tensor idx,
+ const int n) {
+ CHECK_CONTIGUOUS(grad_out);
+ CHECK_CONTIGUOUS(idx);
+ CHECK_IS_FLOAT(grad_out);
+ CHECK_IS_INT(idx);
+
+ if (grad_out.is_cuda()) {
+ CHECK_CUDA(idx);
+ }
+
+ at::Tensor output =
+ torch::zeros({grad_out.size(0), grad_out.size(1), n},
+ at::device(grad_out.device()).dtype(at::ScalarType::Float));
+
+ if (grad_out.is_cuda()) {
+ gather_points_grad_kernel_wrapper(grad_out.size(0), grad_out.size(1), n,
+ idx.size(1), grad_out.data_ptr(),
+ idx.data_ptr(),
+ output.data_ptr());
+ } else {
+ AT_ASSERT(false, "CPU not supported");
+ }
+
+ return output;
+}
+
+at::Tensor furthest_point_sampling(at::Tensor points, const int nsamples) {
+ CHECK_CONTIGUOUS(points);
+ CHECK_IS_FLOAT(points);
+
+ at::Tensor output =
+ torch::zeros({points.size(0), nsamples},
+ at::device(points.device()).dtype(at::ScalarType::Int));
+
+ at::Tensor tmp =
+ torch::full({points.size(0), points.size(1)}, 1e10,
+ at::device(points.device()).dtype(at::ScalarType::Float));
+
+ if (points.is_cuda()) {
+ furthest_point_sampling_kernel_wrapper(
+ points.size(0), points.size(1), nsamples, points.data_ptr(),
+ tmp.data_ptr(), output.data_ptr());
+ } else {
+ AT_ASSERT(false, "CPU not supported");
+ }
+
+ return output;
+}
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/sampling_gpu.cu b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/sampling_gpu.cu
new file mode 100644
index 0000000..877f979
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/sampling_gpu.cu
@@ -0,0 +1,229 @@
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "cuda_utils.h"
+
+// input: points(b, c, n) idx(b, m)
+// output: out(b, c, m)
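+// i.e. out[i][l][j] = points[i][l][idx[i][j]]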
+__global__ void gather_points_kernel(int b, int c, int n, int m,
+ const float *__restrict__ points,
+ const int *__restrict__ idx,
+ float *__restrict__ out) {
+ for (int i = blockIdx.x; i < b; i += gridDim.x) {
+ for (int l = blockIdx.y; l < c; l += gridDim.y) {
+ for (int j = threadIdx.x; j < m; j += blockDim.x) {
+ int a = idx[i * m + j];
+ out[(i * c + l) * m + j] = points[(i * c + l) * n + a];
+ }
+ }
+ }
+}
+
+void gather_points_kernel_wrapper(int b, int c, int n, int npoints,
+ const float *points, const int *idx,
+ float *out) {
+ gather_points_kernel<<<dim3(b, c, 1), opt_n_threads(npoints), 0,
+ at::cuda::getCurrentCUDAStream()>>>(b, c, n, npoints,
+ points, idx, out);
+
+ CUDA_CHECK_ERRORS();
+}
+
+// input: grad_out(b, c, m) idx(b, m)
+// output: grad_points(b, c, n)
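+// i.e. a scatter-add: grad_points[i][l][idx[i][j]] += grad_out[i][l][j],
+// using atomicAdd since an index may repeat within idx[i]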
+__global__ void gather_points_grad_kernel(int b, int c, int n, int m,
+ const float *__restrict__ grad_out,
+ const int *__restrict__ idx,
+ float *__restrict__ grad_points) {
+ for (int i = blockIdx.x; i < b; i += gridDim.x) {
+ for (int l = blockIdx.y; l < c; l += gridDim.y) {
+ for (int j = threadIdx.x; j < m; j += blockDim.x) {
+ int a = idx[i * m + j];
+ atomicAdd(grad_points + (i * c + l) * n + a,
+ grad_out[(i * c + l) * m + j]);
+ }
+ }
+ }
+}
+
+void gather_points_grad_kernel_wrapper(int b, int c, int n, int npoints,
+ const float *grad_out, const int *idx,
+ float *grad_points) {
+ gather_points_grad_kernel<<<dim3(b, c, 1), opt_n_threads(npoints), 0,
+ at::cuda::getCurrentCUDAStream()>>>(
+ b, c, n, npoints, grad_out, idx, grad_points);
+
+ CUDA_CHECK_ERRORS();
+}
+
+__device__ void __update(float *__restrict__ dists, int *__restrict__ dists_i,
+ int idx1, int idx2) {
+ const float v1 = dists[idx1], v2 = dists[idx2];
+ const int i1 = dists_i[idx1], i2 = dists_i[idx2];
+ dists[idx1] = max(v1, v2);
+ dists_i[idx1] = v2 > v1 ? i2 : i1;
+}
+
+// Input dataset: (b, n, 3), temp: (b, n)
+// Output idxs: (b, m)
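+// Each block handles one batch element. Starting from point 0, every
+// iteration updates temp[k], the squared distance from point k to the
+// selected set, then picks the farthest such point via a shared-memory
+// argmax tree reduction over block_size threads.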
+template <unsigned int block_size>
+__global__ void furthest_point_sampling_kernel(
+ int b, int n, int m, const float *__restrict__ dataset,
+ float *__restrict__ temp, int *__restrict__ idxs) {
+ if (m <= 0) return;
+ __shared__ float dists[block_size];
+ __shared__ int dists_i[block_size];
+
+ int batch_index = blockIdx.x;
+ dataset += batch_index * n * 3;
+ temp += batch_index * n;
+ idxs += batch_index * m;
+
+ int tid = threadIdx.x;
+ const int stride = block_size;
+
+ int old = 0;
+ if (threadIdx.x == 0) idxs[0] = old;
+
+ __syncthreads();
+ for (int j = 1; j < m; j++) {
+ int besti = 0;
+ float best = -1;
+ float x1 = dataset[old * 3 + 0];
+ float y1 = dataset[old * 3 + 1];
+ float z1 = dataset[old * 3 + 2];
+ for (int k = tid; k < n; k += stride) {
+ float x2, y2, z2;
+ x2 = dataset[k * 3 + 0];
+ y2 = dataset[k * 3 + 1];
+ z2 = dataset[k * 3 + 2];
+// float mag = (x2 * x2) + (y2 * y2) + (z2 * z2);
+// if (mag <= 1e-3) continue;
+
+ float d =
+ (x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1) + (z2 - z1) * (z2 - z1);
+
+ float d2 = min(d, temp[k]);
+ temp[k] = d2;
+ besti = d2 > best ? k : besti;
+ best = d2 > best ? d2 : best;
+ }
+ dists[tid] = best;
+ dists_i[tid] = besti;
+ __syncthreads();
+
+ if (block_size >= 512) {
+ if (tid < 256) {
+ __update(dists, dists_i, tid, tid + 256);
+ }
+ __syncthreads();
+ }
+ if (block_size >= 256) {
+ if (tid < 128) {
+ __update(dists, dists_i, tid, tid + 128);
+ }
+ __syncthreads();
+ }
+ if (block_size >= 128) {
+ if (tid < 64) {
+ __update(dists, dists_i, tid, tid + 64);
+ }
+ __syncthreads();
+ }
+ if (block_size >= 64) {
+ if (tid < 32) {
+ __update(dists, dists_i, tid, tid + 32);
+ }
+ __syncthreads();
+ }
+ if (block_size >= 32) {
+ if (tid < 16) {
+ __update(dists, dists_i, tid, tid + 16);
+ }
+ __syncthreads();
+ }
+ if (block_size >= 16) {
+ if (tid < 8) {
+ __update(dists, dists_i, tid, tid + 8);
+ }
+ __syncthreads();
+ }
+ if (block_size >= 8) {
+ if (tid < 4) {
+ __update(dists, dists_i, tid, tid + 4);
+ }
+ __syncthreads();
+ }
+ if (block_size >= 4) {
+ if (tid < 2) {
+ __update(dists, dists_i, tid, tid + 2);
+ }
+ __syncthreads();
+ }
+ if (block_size >= 2) {
+ if (tid < 1) {
+ __update(dists, dists_i, tid, tid + 1);
+ }
+ __syncthreads();
+ }
+
+ old = dists_i[0];
+ if (tid == 0) idxs[j] = old;
+ }
+}
+
+void furthest_point_sampling_kernel_wrapper(int b, int n, int m,
+ const float *dataset, float *temp,
+ int *idxs) {
+ unsigned int n_threads = opt_n_threads(n);
+
+ cudaStream_t stream = at::cuda::getCurrentCUDAStream();
+
+ switch (n_threads) {
+ case 512:
+ furthest_point_sampling_kernel<512>
+ <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);
+ break;
+ case 256:
+ furthest_point_sampling_kernel<256>
+ <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);
+ break;
+ case 128:
+ furthest_point_sampling_kernel<128>
+ <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);
+ break;
+ case 64:
+ furthest_point_sampling_kernel<64>
+ <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);
+ break;
+ case 32:
+ furthest_point_sampling_kernel<32>
+ <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);
+ break;
+ case 16:
+ furthest_point_sampling_kernel<16>
+ <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);
+ break;
+ case 8:
+ furthest_point_sampling_kernel<8>
+ <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);
+ break;
+ case 4:
+ furthest_point_sampling_kernel<4>
+ <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);
+ break;
+ case 2:
+ furthest_point_sampling_kernel<2>
+ <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);
+ break;
+ case 1:
+ furthest_point_sampling_kernel<1>
+ <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);
+ break;
+ default:
+ furthest_point_sampling_kernel<512>
+ <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);
+ }
+
+ CUDA_CHECK_ERRORS();
+}
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_version.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_version.py
new file mode 100644
index 0000000..528787c
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/_version.py
@@ -0,0 +1 @@
+__version__ = "3.0.0"
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/pointnet2_modules.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/pointnet2_modules.py
new file mode 100644
index 0000000..a0ad4f6
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/pointnet2_modules.py
@@ -0,0 +1,209 @@
+from typing import List, Optional, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from pointnet2_ops import pointnet2_utils
+
+
+def build_shared_mlp(mlp_spec: List[int], bn: bool = True):
+ layers = []
+ for i in range(1, len(mlp_spec)):
+ layers.append(
+ nn.Conv2d(mlp_spec[i - 1], mlp_spec[i], kernel_size=1, bias=not bn)
+ )
+ if bn:
+ layers.append(nn.BatchNorm2d(mlp_spec[i]))
+ layers.append(nn.ReLU(True))
+
+ return nn.Sequential(*layers)
+
+
+class _PointnetSAModuleBase(nn.Module):
+ def __init__(self):
+ super(_PointnetSAModuleBase, self).__init__()
+ self.npoint = None
+ self.groupers = None
+ self.mlps = None
+
+ def forward(
+ self, xyz: torch.Tensor, features: Optional[torch.Tensor]
+ ) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Parameters
+ ----------
+ xyz : torch.Tensor
+ (B, N, 3) tensor of the xyz coordinates of the features
+ features : torch.Tensor
+ (B, C, N) tensor of the descriptors of the features
+
+ Returns
+ -------
+ new_xyz : torch.Tensor
+ (B, npoint, 3) tensor of the new features' xyz
+ new_features : torch.Tensor
+ (B, \sum_k(mlps[k][-1]), npoint) tensor of the new_features descriptors
+ """
+
+ new_features_list = []
+
+ xyz_flipped = xyz.transpose(1, 2).contiguous()
+ new_xyz = (
+ pointnet2_utils.gather_operation(
+ xyz_flipped, pointnet2_utils.furthest_point_sample(xyz, self.npoint)
+ )
+ .transpose(1, 2)
+ .contiguous()
+ if self.npoint is not None
+ else None
+ )
+
+ for i in range(len(self.groupers)):
+ new_features = self.groupers[i](
+ xyz, new_xyz, features
+ ) # (B, C, npoint, nsample)
+
+ new_features = self.mlps[i](new_features) # (B, mlp[-1], npoint, nsample)
+ new_features = F.max_pool2d(
+ new_features, kernel_size=[1, new_features.size(3)]
+ ) # (B, mlp[-1], npoint, 1)
+ new_features = new_features.squeeze(-1) # (B, mlp[-1], npoint)
+
+ new_features_list.append(new_features)
+
+ return new_xyz, torch.cat(new_features_list, dim=1)
+
+
+class PointnetSAModuleMSG(_PointnetSAModuleBase):
+ r"""Pointnet set abstrction layer with multiscale grouping
+
+ Parameters
+ ----------
+ npoint : int
+ Number of features
+ radii : list of float32
+ list of radii to group with
+ nsamples : list of int32
+ Number of samples in each ball query
+ mlps : list of list of int32
+ Spec of the pointnet before the global max_pool for each scale
+ bn : bool
+ Use batchnorm
+ """
+
+ def __init__(self, npoint, radii, nsamples, mlps, bn=True, use_xyz=True):
+ # type: (PointnetSAModuleMSG, int, List[float], List[int], List[List[int]], bool, bool) -> None
+ super(PointnetSAModuleMSG, self).__init__()
+
+ assert len(radii) == len(nsamples) == len(mlps)
+
+ self.npoint = npoint
+ self.groupers = nn.ModuleList()
+ self.mlps = nn.ModuleList()
+ for i in range(len(radii)):
+ radius = radii[i]
+ nsample = nsamples[i]
+ self.groupers.append(
+ pointnet2_utils.QueryAndGroup(radius, nsample, use_xyz=use_xyz)
+ if npoint is not None
+ else pointnet2_utils.GroupAll(use_xyz)
+ )
+ mlp_spec = mlps[i]
+ if use_xyz:
+ mlp_spec[0] += 3
+
+ self.mlps.append(build_shared_mlp(mlp_spec, bn))
+
+
+class PointnetSAModule(PointnetSAModuleMSG):
+ r"""Pointnet set abstrction layer
+
+ Parameters
+ ----------
+ npoint : int
+ Number of features
+ radius : float
+ Radius of ball
+ nsample : int
+ Number of samples in the ball query
+ mlp : list
+ Spec of the pointnet before the global max_pool
+ bn : bool
+ Use batchnorm
+ """
+
+ def __init__(
+ self, mlp, npoint=None, radius=None, nsample=None, bn=True, use_xyz=True
+ ):
+ # type: (PointnetSAModule, List[int], int, float, int, bool, bool) -> None
+ super(PointnetSAModule, self).__init__(
+ mlps=[mlp],
+ npoint=npoint,
+ radii=[radius],
+ nsamples=[nsample],
+ bn=bn,
+ use_xyz=use_xyz,
+ )
+
+
+class PointnetFPModule(nn.Module):
+ r"""Propigates the features of one set to another
+
+ Parameters
+ ----------
+ mlp : list
+ Pointnet module parameters
+ bn : bool
+ Use batchnorm
+ """
+
+ def __init__(self, mlp, bn=True):
+ # type: (PointnetFPModule, List[int], bool) -> None
+ super(PointnetFPModule, self).__init__()
+ self.mlp = build_shared_mlp(mlp, bn=bn)
+
+ def forward(self, unknown, known, unknow_feats, known_feats):
+ # type: (PointnetFPModule, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor) -> torch.Tensor
+ r"""
+ Parameters
+ ----------
+ unknown : torch.Tensor
+ (B, n, 3) tensor of the xyz positions of the unknown features
+ known : torch.Tensor
+ (B, m, 3) tensor of the xyz positions of the known features
+ unknow_feats : torch.Tensor
+ (B, C1, n) tensor of the features to be propagated to
+ known_feats : torch.Tensor
+ (B, C2, m) tensor of features to be propagated
+
+ Returns
+ -------
+ new_features : torch.Tensor
+ (B, mlp[-1], n) tensor of the new features for the unknown points
+ """
+
+ if known is not None:
+ dist, idx = pointnet2_utils.three_nn(unknown, known)
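+ # inverse-distance weighting: w_k = (1/d_k) / sum_j (1/d_j)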
+ dist_recip = 1.0 / (dist + 1e-8)
+ norm = torch.sum(dist_recip, dim=2, keepdim=True)
+ weight = dist_recip / norm
+
+ interpolated_feats = pointnet2_utils.three_interpolate(
+ known_feats, idx, weight
+ )
+ else:
+ interpolated_feats = known_feats.expand(
+ *(known_feats.size()[0:2] + (unknown.size(1),))
+ )
+
+ if unknow_feats is not None:
+ new_features = torch.cat(
+ [interpolated_feats, unknow_feats], dim=1
+ ) # (B, C2 + C1, n)
+ else:
+ new_features = interpolated_feats
+
+ new_features = new_features.unsqueeze(-1)
+ new_features = self.mlp(new_features)
+
+ return new_features.squeeze(-1)
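+
+
+if __name__ == "__main__":
+ # Minimal smoke-test sketch: one set-abstraction + feature-propagation
+ # round trip on a random cloud. Illustrative only, not part of the upstream
+ # module; it assumes a CUDA device, since the CPU paths of the underlying
+ # ops are unimplemented.
+ xyz = torch.randn(2, 1024, 3).cuda()
+ feats_in = xyz.transpose(1, 2).contiguous() # reuse xyz as (B, 3, N) features
+ sa = PointnetSAModule(mlp=[3, 64], npoint=256, radius=0.2, nsample=32).cuda()
+ new_xyz, new_feats = sa(xyz, feats_in) # (2, 256, 3), (2, 64, 256)
+ fp = PointnetFPModule(mlp=[64, 32]).cuda()
+ up_feats = fp(xyz, new_xyz, None, new_feats) # (2, 32, 1024)
+ print(new_xyz.shape, new_feats.shape, up_feats.shape)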
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/pointnet2_utils.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/pointnet2_utils.py
new file mode 100644
index 0000000..150fccc
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/pointnet2_ops/pointnet2_utils.py
@@ -0,0 +1,379 @@
+import torch
+import torch.nn as nn
+import warnings
+from torch.autograd import Function
+from typing import *
+
+try:
+ import pointnet2_ops._ext as _ext
+except ImportError:
+ from torch.utils.cpp_extension import load
+ import glob
+ import os.path as osp
+ import os
+
+ warnings.warn("Unable to load pointnet2_ops cpp extension. JIT Compiling.")
+
+ _ext_src_root = osp.join(osp.dirname(__file__), "_ext-src")
+ _ext_sources = glob.glob(osp.join(_ext_src_root, "src", "*.cpp")) + glob.glob(
+ osp.join(_ext_src_root, "src", "*.cu")
+ )
+ _ext_headers = glob.glob(osp.join(_ext_src_root, "include", "*"))
+
+ os.environ["TORCH_CUDA_ARCH_LIST"] = "3.7+PTX;5.0;6.0;6.1;6.2;7.0;7.5"
+ _ext = load(
+ "_ext",
+ sources=_ext_sources,
+ extra_include_paths=[osp.join(_ext_src_root, "include")],
+ extra_cflags=["-O3"],
+ extra_cuda_cflags=["-O3", "-Xfatbin", "-compress-all"],
+ with_cuda=True,
+ )
+
+
+class FurthestPointSampling(Function):
+ @staticmethod
+ def forward(ctx, xyz, npoint):
+ # type: (Any, torch.Tensor, int) -> torch.Tensor
+ r"""
+ Uses iterative furthest point sampling to select a set of npoint features that have the largest
+ minimum distance to the previously selected features
+
+ Parameters
+ ----------
+ xyz : torch.Tensor
+ (B, N, 3) tensor where N > npoint
+ npoint : int32
+ number of features in the sampled set
+
+ Returns
+ -------
+ torch.Tensor
+ (B, npoint) tensor containing the set
+ """
+ out = _ext.furthest_point_sampling(xyz, npoint)
+
+ ctx.mark_non_differentiable(out)
+
+ return out
+
+ @staticmethod
+ def backward(ctx, grad_out):
+ return ()
+
+
+furthest_point_sample = FurthestPointSampling.apply
+
+
+class GatherOperation(Function):
+ @staticmethod
+ def forward(ctx, features, idx):
+ # type: (Any, torch.Tensor, torch.Tensor) -> torch.Tensor
+ r"""
+
+ Parameters
+ ----------
+ features : torch.Tensor
+ (B, C, N) tensor
+
+ idx : torch.Tensor
+ (B, npoint) tensor of the features to gather
+
+ Returns
+ -------
+ torch.Tensor
+ (B, C, npoint) tensor
+ """
+
+ ctx.save_for_backward(idx, features)
+
+ return _ext.gather_points(features, idx)
+
+ @staticmethod
+ def backward(ctx, grad_out):
+ idx, features = ctx.saved_tensors
+ N = features.size(2)
+
+ grad_features = _ext.gather_points_grad(grad_out.contiguous(), idx, N)
+ return grad_features, None
+
+
+gather_operation = GatherOperation.apply
+
+
+class ThreeNN(Function):
+ @staticmethod
+ def forward(ctx, unknown, known):
+ # type: (Any, torch.Tensor, torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]
+ r"""
+ Find the three nearest neighbors of unknown in known
+ Parameters
+ ----------
+ unknown : torch.Tensor
+ (B, n, 3) tensor of unknown features
+ known : torch.Tensor
+ (B, m, 3) tensor of known features
+
+ Returns
+ -------
+ dist : torch.Tensor
+ (B, n, 3) l2 distance to the three nearest neighbors
+ idx : torch.Tensor
+ (B, n, 3) index of 3 nearest neighbors
+ """
+ dist2, idx = _ext.three_nn(unknown, known)
+ dist = torch.sqrt(dist2)
+
+ ctx.mark_non_differentiable(dist, idx)
+
+ return dist, idx
+
+ @staticmethod
+ def backward(ctx, grad_dist, grad_idx):
+ return ()
+
+
+three_nn = ThreeNN.apply
+
+
+class ThreeInterpolate(Function):
+ @staticmethod
+ def forward(ctx, features, idx, weight):
+ # type: (Any, torch.Tensor, torch.Tensor, torch.Tensor) -> torch.Tensor
+ r"""
+ Performs weighted linear interpolation on 3 features
+ Parameters
+ ----------
+ features : torch.Tensor
+ (B, c, m) Feature descriptors to be interpolated from
+ idx : torch.Tensor
+ (B, n, 3) three nearest neighbors of the target features in features
+ weight : torch.Tensor
+ (B, n, 3) weights
+
+ Returns
+ -------
+ torch.Tensor
+ (B, c, n) tensor of the interpolated features
+ """
+ ctx.save_for_backward(idx, weight, features)
+
+ return _ext.three_interpolate(features, idx, weight)
+
+ @staticmethod
+ def backward(ctx, grad_out):
+ # type: (Any, torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
+ r"""
+ Parameters
+ ----------
+ grad_out : torch.Tensor
+ (B, c, n) tensor with gradients of outputs
+
+ Returns
+ -------
+ grad_features : torch.Tensor
+ (B, c, m) tensor with gradients of features
+
+ None
+
+ None
+ """
+ idx, weight, features = ctx.saved_tensors
+ m = features.size(2)
+
+ grad_features = _ext.three_interpolate_grad(
+ grad_out.contiguous(), idx, weight, m
+ )
+
+ return grad_features, torch.zeros_like(idx), torch.zeros_like(weight)
+
+
+three_interpolate = ThreeInterpolate.apply
+
+
+class GroupingOperation(Function):
+ @staticmethod
+ def forward(ctx, features, idx):
+ # type: (Any, torch.Tensor, torch.Tensor) -> torch.Tensor
+ r"""
+
+ Parameters
+ ----------
+ features : torch.Tensor
+ (B, C, N) tensor of features to group
+ idx : torch.Tensor
+ (B, npoint, nsample) tensor containing the indices of features to group with
+
+ Returns
+ -------
+ torch.Tensor
+ (B, C, npoint, nsample) tensor
+ """
+ ctx.save_for_backward(idx, features)
+
+ return _ext.group_points(features, idx)
+
+ @staticmethod
+ def backward(ctx, grad_out):
+ # type: (Any, torch.tensor) -> Tuple[torch.Tensor, torch.Tensor]
+ r"""
+
+ Parameters
+ ----------
+ grad_out : torch.Tensor
+ (B, C, npoint, nsample) tensor of the gradients of the output from forward
+
+ Returns
+ -------
+ torch.Tensor
+ (B, C, N) gradient of the features
+ None
+ """
+ idx, features = ctx.saved_tensors
+ N = features.size(2)
+
+ grad_features = _ext.group_points_grad(grad_out.contiguous(), idx, N)
+
+ return grad_features, torch.zeros_like(idx)
+
+
+grouping_operation = GroupingOperation.apply
+
+
+class BallQuery(Function):
+ @staticmethod
+ def forward(ctx, radius, nsample, xyz, new_xyz):
+ # type: (Any, float, int, torch.Tensor, torch.Tensor) -> torch.Tensor
+ r"""
+
+ Parameters
+ ----------
+ radius : float
+ radius of the balls
+ nsample : int
+ maximum number of features in the balls
+ xyz : torch.Tensor
+ (B, N, 3) xyz coordinates of the features
+ new_xyz : torch.Tensor
+ (B, npoint, 3) centers of the ball query
+
+ Returns
+ -------
+ torch.Tensor
+ (B, npoint, nsample) tensor with the indices of the features that form the query balls
+ """
+ output = _ext.ball_query(new_xyz, xyz, radius, nsample)
+
+ ctx.mark_non_differentiable(output)
+
+ return output
+
+ @staticmethod
+ def backward(ctx, grad_out):
+ return ()
+
+
+ball_query = BallQuery.apply
+
+
+class QueryAndGroup(nn.Module):
+ r"""
+ Groups features with a ball query of the given radius
+
+ Parameters
+ ----------
+ radius : float32
+ Radius of ball
+ nsample : int32
+ Maximum number of features to gather in the ball
+ """
+
+ def __init__(self, radius, nsample, use_xyz=True):
+ # type: (QueryAndGroup, float, int, bool) -> None
+ super(QueryAndGroup, self).__init__()
+ self.radius, self.nsample, self.use_xyz = radius, nsample, use_xyz
+
+ def forward(self, xyz, new_xyz, features=None):
+ # type: (QueryAndGroup, torch.Tensor, torch.Tensor, torch.Tensor) -> torch.Tensor
+ r"""
+ Parameters
+ ----------
+ xyz : torch.Tensor
+ xyz coordinates of the features (B, N, 3)
+ new_xyz : torch.Tensor
+ centroids (B, npoint, 3)
+ features : torch.Tensor
+ Descriptors of the features (B, C, N)
+
+ Returns
+ -------
+ new_features : torch.Tensor
+ (B, 3 + C, npoint, nsample) tensor
+ """
+
+ idx = ball_query(self.radius, self.nsample, xyz, new_xyz)
+ xyz_trans = xyz.transpose(1, 2).contiguous()
+ grouped_xyz = grouping_operation(xyz_trans, idx) # (B, 3, npoint, nsample)
+ grouped_xyz -= new_xyz.transpose(1, 2).unsqueeze(-1)
+
+ if features is not None:
+ grouped_features = grouping_operation(features, idx)
+ if self.use_xyz:
+ new_features = torch.cat(
+ [grouped_xyz, grouped_features], dim=1
+ ) # (B, C + 3, npoint, nsample)
+ else:
+ new_features = grouped_features
+ else:
+ assert (
+ self.use_xyz
+ ), "Cannot have not features and not use xyz as a feature!"
+ new_features = grouped_xyz
+
+ return new_features
+
+
+class GroupAll(nn.Module):
+ r"""
+ Groups all features
+
+ Parameters
+ ----------
+ """
+
+ def __init__(self, use_xyz=True):
+ # type: (GroupAll, bool) -> None
+ super(GroupAll, self).__init__()
+ self.use_xyz = use_xyz
+
+ def forward(self, xyz, new_xyz, features=None):
+ # type: (GroupAll, torch.Tensor, torch.Tensor, torch.Tensor) -> torch.Tensor
+ r"""
+ Parameters
+ ----------
+ xyz : torch.Tensor
+ xyz coordinates of the features (B, N, 3)
+ new_xyz : torch.Tensor
+ Ignored
+ features : torch.Tensor
+ Descriptors of the features (B, C, N)
+
+ Returns
+ -------
+ new_features : torch.Tensor
+ (B, C + 3, 1, N) tensor
+ """
+
+ grouped_xyz = xyz.transpose(1, 2).unsqueeze(2)
+ if features is not None:
+ grouped_features = features.unsqueeze(2)
+ if self.use_xyz:
+ new_features = torch.cat(
+ [grouped_xyz, grouped_features], dim=1
+ ) # (B, 3 + C, 1, N)
+ else:
+ new_features = grouped_features
+ else:
+ new_features = grouped_xyz
+
+ return new_features
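+
+
+if __name__ == "__main__":
+ # Usage sketch (illustrative, not part of the upstream file): downsample a
+ # random cloud with furthest point sampling, then group a fixed-size
+ # neighborhood around each sampled center. Assumes a CUDA device, since the
+ # CPU paths of the extension raise.
+ pts = torch.rand(2, 1024, 3, device="cuda")
+ idx = furthest_point_sample(pts, 128) # (2, 128) int32 sample indices
+ centers = (
+ gather_operation(pts.transpose(1, 2).contiguous(), idx)
+ .transpose(1, 2)
+ .contiguous()
+ ) # (2, 128, 3) sampled coordinates
+ grouped = QueryAndGroup(radius=0.2, nsample=16)(pts, centers)
+ print(idx.shape, centers.shape, grouped.shape) # grouped: (2, 3, 128, 16)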
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/setup.py b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/setup.py
new file mode 100644
index 0000000..faf7154
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pointnet2_ops_lib/setup.py
@@ -0,0 +1,39 @@
+import glob
+import os
+import os.path as osp
+
+from setuptools import find_packages, setup
+from torch.utils.cpp_extension import BuildExtension, CUDAExtension
+
+this_dir = osp.dirname(osp.abspath(__file__))
+_ext_src_root = osp.join("pointnet2_ops", "_ext-src")
+_ext_sources = glob.glob(osp.join(_ext_src_root, "src", "*.cpp")) + glob.glob(
+ osp.join(_ext_src_root, "src", "*.cu")
+)
+_ext_headers = glob.glob(osp.join(_ext_src_root, "include", "*"))
+
+requirements = ["torch>=1.4"]
+
+exec(open(osp.join("pointnet2_ops", "_version.py")).read())
+
+os.environ["TORCH_CUDA_ARCH_LIST"] = "3.7+PTX;5.0;6.0;6.1;6.2;7.0;7.5"
+setup(
+ name="pointnet2_ops",
+ version=__version__,
+ author="Erik Wijmans",
+ packages=find_packages(),
+ install_requires=requirements,
+ ext_modules=[
+ CUDAExtension(
+ name="pointnet2_ops._ext",
+ sources=_ext_sources,
+ extra_compile_args={
+ "cxx": ["-O3"],
+ "nvcc": ["-O3", "-Xfatbin", "-compress-all"],
+ },
+ include_dirs=[osp.join(this_dir, _ext_src_root, "include")],
+ )
+ ],
+ cmdclass={"build_ext": BuildExtension},
+ include_package_data=True,
+)
diff --git a/ao-grasp/models/Pointnet2_PyTorch/pyproject.toml b/ao-grasp/models/Pointnet2_PyTorch/pyproject.toml
new file mode 100644
index 0000000..8205f39
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/pyproject.toml
@@ -0,0 +1,28 @@
+[tool.isort]
+skip_glob = ["*/deps/*", "*/build/*", "*/obselete/*"]
+known_third_party = ["h5py", "hydra", "lmdb", "msgpack_numpy", "numpy", "omegaconf", "pointnet2_ops", "pytest", "pytorch_lightning", "setuptools", "torch", "torchvision", "tqdm"]
+multi_line_output = 3
+force_grid_wrap = false
+line_length = 88
+include_trailing_comma = true
+use_parentheses = true
+
+[tool.black]
+exclude = '''
+(
+ /(
+ \.eggs # exclude a few common directories in the
+ | \.git # root of the project
+ | \.hg
+ | \.mypy_cache
+ | \.tox
+ | \.venv
+ | _build
+ | buck-out
+ | build
+ | dist
+ | obselete
+ | deps
+ )/
+)
+'''
diff --git a/ao-grasp/models/Pointnet2_PyTorch/requirements.txt b/ao-grasp/models/Pointnet2_PyTorch/requirements.txt
new file mode 100644
index 0000000..d670366
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/requirements.txt
@@ -0,0 +1,9 @@
+numpy
+msgpack-numpy
+lmdb
+h5py
+
+hydra-core==0.11.3
+pytorch-lightning==0.7.1
+
+./pointnet2_ops_lib/.
diff --git a/ao-grasp/models/Pointnet2_PyTorch/setup.py b/ao-grasp/models/Pointnet2_PyTorch/setup.py
new file mode 100644
index 0000000..fac260a
--- /dev/null
+++ b/ao-grasp/models/Pointnet2_PyTorch/setup.py
@@ -0,0 +1,16 @@
+import os.path as osp
+
+from setuptools import find_packages, setup
+
+requirements = ["hydra-core==0.11.3", "pytorch-lightning==0.7.1"]
+
+
+exec(open(osp.join("pointnet2", "_version.py")).read())
+
+setup(
+ name="pointnet2",
+ version=__version__,
+ author="Erik Wijmans",
+ packages=find_packages(),
+ install_requires=requirements,
+)