solve merge
This commit is contained in: commit bd27226f0f

Readme.md (new file, 192 lines)
@@ -0,0 +1,192 @@
# Next Best View for Reconstruction

## 1. Setup Environment

### 1.1 Install Main Project

```bash
mkdir nbv_rec
cd nbv_rec
git clone https://git.hofee.top/hofee/nbv_reconstruction.git
```
### 1.2 Install PytorchBoot

The environment is based on PytorchBoot. Clone and install it from [PytorchBoot](https://git.hofee.top/hofee/PyTorchBoot.git):

```bash
git clone https://git.hofee.top/hofee/PyTorchBoot.git
cd PyTorchBoot
pip install .
cd ..
```
### 1.3 Install Blender (Optional)

If you want to render your own dataset as described in [section 2. Render Datasets](#2-render-datasets-optional), you'll need to install Blender 4.0 from [Blender Release](https://download.blender.org/release/Blender4.0/). Here is an example of installing Blender on Ubuntu:

```bash
wget https://download.blender.org/release/Blender4.0/blender-4.0.2-linux-x64.tar.xz
tar -xvf blender-4.0.2-linux-x64.tar.xz
```

If Blender is not in your PATH, you can add it with:

```bash
export PATH=$PATH:/path/to/blender/blender-4.0.2-linux-x64
```

To run the Blender script, you need to install the `pyyaml` and `scipy` packages into your Blender Python environment. Run the following command to print the Python path of your Blender installation:

```bash
./blender -b --python-expr "import sys; print(sys.executable)"
```

Then copy the Python path `/path/to/blender_python` shown in the output and run the following command to install the packages:

```bash
/path/to/blender_python -m pip install pyyaml scipy
```
### 1.4 Install Blender Render Script (Optional)

Clone the script from [nbv_rec_blender_render](https://git.hofee.top/hofee/nbv_rec_blender_render.git) and rename it to `blender`:

```bash
git clone https://git.hofee.top/hofee/nbv_rec_blender_render.git
mv nbv_rec_blender_render blender
```
### 1.5 Check Dependencies

Switch to the project root directory and run `pytorch-boot scan` or `ptb scan` to check if all dependencies are installed:

```bash
cd nbv_reconstruction
pytorch-boot scan
# or
ptb scan
```

If you see project structure information in the output, it means all dependencies are correctly installed. Otherwise, you may need to run `pip install xxx` to install the missing packages.
## 2. Render Datasets (Optional)

### 2.1 Download Object Mesh Models

Download the mesh models, which are split into three parts:

- [object_meshes_part1.zip](None)
- [object_meshes_part2.zip](https://pan.baidu.com/s/1pBPhrFtBwEGp1g4vwsLIxA?pwd=1234)
- [object_meshes_part3.zip](https://pan.baidu.com/s/1peE8HqFFL0qNFhM5OC69gA?pwd=1234)

Or download the whole dataset from [object_meshes.zip](https://pan.baidu.com/s/1ilWWgzg_l7_pPBv64eSgzA?pwd=1234).

Download the table model from [table.obj](https://pan.baidu.com/s/1sjjiID25Es_kmcdUIjU_Dw?pwd=1234).
### 2.2 Set Render Configurations

Open the file `configs/local/view_generate_config.yaml` and modify the parameters to fit your needs. You are required to at least set the following parameters in `runner-generate` (see the example sketch after this list):

- `object_dir`: the directory of the downloaded object mesh models
- `output_dir`: the directory to save the rendered dataset
- `table_model_path`: the path of the downloaded table model
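A minimal sketch of the `runner-generate` section, assuming the key names used by `configs/local/view_generate_config.yaml` in this commit; all paths are placeholders to replace with your own:

```yaml
runner:
  generate:
    object_dir: /path/to/object_meshes      # downloaded object mesh models
    output_dir: /path/to/rendered_dataset   # where rendered views are written
    table_model_path: /path/to/table.obj    # downloaded table model
    from: 0
    to: -1                # -1 means all objects
    binocular_vision: true
    plane_size: 10
    max_views: 512
```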
### 2.3 Render Dataset

There are two ways to render the dataset:

#### 2.3.1 Render with Visual Monitoring

If you want to visually monitor the rendering progress and machine resource usage:

1. In the terminal, run:
   ```
   ptb ui
   ```
2. Open your browser and visit http://localhost:5000
3. Navigate to `Project Dashboard - Project Structure - Applications - generate_view`
4. Click the `Run` button to execute the rendering script
#### 2.3.2 Render in Terminal

If you don't need visual monitoring and prefer to run the rendering process directly in the terminal, simply run:

```
ptb run generate_view
```

This command will start the rendering process without launching the UI.
## 3. Preprocess

⚠️ The preprocessing code is currently not managed by `PytorchBoot`. To run the preprocessing (a sketch of the relevant block follows the steps):

1. Open the `./preprocess/preprocessor.py` file.
2. Locate the `if __name__ == "__main__":` block at the bottom of the file.
3. Specify the dataset folder by setting `root = "path/to/your/dataset"`.
4. Run the preprocessing script directly:

```
python ./preprocess/preprocessor.py
```

This will preprocess the data in the specified dataset folder.
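For orientation, the `__main__` block you edit looks roughly like the sketch below (simplified from `preprocess/preprocessor.py` in this commit; normally only `root`, and optionally `from_idx`/`to_idx`, need to be changed):

```python
if __name__ == "__main__":
    root = r"path/to/your/dataset"   # <- set this to your rendered dataset folder
    scene_list = os.listdir(root)
    from_idx = 0                     # optional: process only a slice of the scenes
    to_idx = len(scene_list)

    cnt = 0
    total = to_idx - from_idx
    for scene in scene_list[from_idx:to_idx]:
        # scenes that already contain scan_points.txt are treated as done and skipped
        if os.path.exists(os.path.join(root, scene, "scan_points.txt")):
            cnt += 1
            continue
        save_scene_data(root, scene, cnt, total, file_type="npy")
        cnt += 1
```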
## 4. Generate Strategy Label

### 4.1 Set Configuration

Open the file `configs/local/strategy_generate_config.yaml` and modify the parameters to fit your needs. You are required to at least set the following parameter (see the sketch below):

- `datasets.OmniObject3d.root_dir`: the directory of your dataset
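A minimal sketch of the relevant part of `configs/local/strategy_generate_config.yaml`, assuming the key layout implied by the configuration files in this commit; the path is a placeholder:

```yaml
datasets:
  OmniObject3d:
    root_dir: /path/to/your/dataset   # preprocessed dataset directory
    from: 0
    to: -1    # -1 means end
```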
### 4.2 Generate Strategy Label

There are two ways to generate the strategy label:

#### 4.2.1 Generate with Visual Monitoring

If you want to visually monitor the generation progress and machine resource usage:

1. In the terminal, run:
   ```
   ptb ui
   ```
2. Open your browser and visit http://localhost:5000
3. Navigate to `Project Dashboard - Project Structure - Applications - generate_strategy`
4. Click the `Run` button to execute the generation script

#### 4.2.2 Generate in Terminal

If you don't need visual monitoring and prefer to run the generation process directly in the terminal, simply run:

```
ptb run generate_strategy
```

This command will start the strategy label generation process without launching the UI.
## 5. Train

### 5.1 Set Configuration

Open the file `configs/local/train_config.yaml` and modify the parameters to fit your needs. You are required to at least set the following parameters in the `experiment` section:

```yaml
experiment:
  name: your_experiment_name
  root_dir: path/to/your/experiment_dir
  use_checkpoint: False # if True, the checkpoint will be loaded
  epoch: 600 # specific epoch to load, -1 stands for last epoch
  max_epochs: 5000 # maximum epochs to train
  save_checkpoint_interval: 1 # save checkpoint interval
  test_first: True # if True, test process will be performed before training at each epoch
```

Adjust these parameters according to your training requirements.
### 5.2 Start Training

There are two ways to start the training process:

#### 5.2.1 Train with Visual Monitoring

If you want to visually monitor the training progress and machine resource usage:

1. In the terminal, run:
   ```
   ptb ui
   ```
2. Open your browser and visit http://localhost:5000
3. Navigate to `Project Dashboard - Project Structure - Applications - train`
4. Click the `Run` button to start the training process

#### 5.2.2 Train in Terminal

If you don't need visual monitoring and prefer to run the training process directly in the terminal, simply run:

```
ptb run train
```

This command will start the training process without launching the UI.
## 6. Evaluation

...
TODO.md (deleted, 22 lines)
@@ -1,22 +0,0 @@
# TODO

## Preprocessing data

### 1. View generation stage
**input**: object mesh

### 2. Label generation stage
**input**: target object point cloud, target object point cloud normals, table scan points, captured table scan points

**data that can be deleted**: mask, normal

### 3. Training stage
**input**: complete point cloud, pose, label

**data that can be deleted**: depth

### After view generation
Preprocess the target object point cloud, its normals, the table scan points, the captured table scan points, and the complete point cloud.

Delete depth, mask, normal.

### After label generation
Upload only: complete point cloud, pose, label
@@ -5,5 +5,5 @@ from runners.strategy_generator import StrategyGenerator
 class DataGenerateApp:
     @staticmethod
     def start():
-        StrategyGenerator("configs/server/server_strategy_generate_config.yaml").run()
+        StrategyGenerator("configs/local/strategy_generate_config.yaml").run()
@@ -12,14 +12,9 @@ runner:
 
   generate:
     voxel_threshold: 0.003
-    soft_overlap_threshold: 0.3
-    hard_overlap_threshold: 0.6
-    filter_degree: 75
-    to_specified_dir: True # if True, output_dir is used, otherwise, root_dir is used
-    save_points: True
-    load_points: True
-    save_best_combined_points: False
-    save_mesh: True
+    overlap_area_threshold: 25
+    compute_with_normal: False
+    scan_points_threshold: 10
     overwrite: False
     seq_num: 15
     dataset_list:
@@ -27,11 +22,8 @@ runner:
 datasets:
   OmniObject3d:
-    #"/media/hofee/data/data/temp_output"
-    root_dir: /media/hofee/repository/full_data_output
-    model_dir: /media/hofee/data/data/scaled_object_meshes
+    root_dir: C:\\Document\\Local Project\\nbv_rec\\nbv_reconstruction\\temp
     from: 0
-    to: -1 # -1 means end
+    to: 1 # -1 means end
-    #output_dir: "/media/hofee/data/data/label_output"
 
@@ -7,12 +7,12 @@ runner:
     name: debug
     root_dir: experiments
   generate:
-    port: 5004
+    port: 5002
-    from: 0
+    from: 600
-    to: 1 # -1 means all
+    to: -1 # -1 means all
-    object_dir: H:\\AI\\Datasets\\scaled_object_box_meshes
+    object_dir: /media/hofee/data/data/object_meshes_part1
-    table_model_path: "H:\\AI\\Datasets\\table.obj"
+    table_model_path: "/media/hofee/data/data/others/table.obj"
-    output_dir: C:\\Document\\Local Project\\nbv_rec\\nbv_reconstruction\\temp
+    output_dir: /media/hofee/repository/data_part_1
     binocular_vision: true
     plane_size: 10
     max_views: 512
@@ -1,37 +0,0 @@
-
-runner:
-  general:
-    seed: 0
-    device: cpu
-    cuda_visible_devices: "0,1,2,3,4,5,6,7"
-
-
-  experiment:
-    name: debug
-    root_dir: "experiments"
-
-  generate:
-    voxel_threshold: 0.003
-    soft_overlap_threshold: 0.3
-    hard_overlap_threshold: 0.6
-    filter_degree: 75
-    to_specified_dir: True # if True, output_dir is used, otherwise, root_dir is used
-    save_points: True
-    load_points: True
-    save_best_combined_points: False
-    save_mesh: True
-    overwrite: False
-    seq_num: 15
-    dataset_list:
-      - OmniObject3d
-
-datasets:
-  OmniObject3d:
-    #"/media/hofee/data/data/temp_output"
-    root_dir: /data/hofee/data/packed_preprocessed_data
-    model_dir: /media/hofee/data/data/scaled_object_meshes
-    from: 0
-    to: -1 # -1 means end
-    #output_dir: "/media/hofee/data/data/label_output"
-
-
preprocess/clean_preprocessed_data.py (new file, 43 lines)
@@ -0,0 +1,43 @@
import os
import shutil

def clean_scene_data(root, scene):
    # Remove the target point cloud data
    pts_dir = os.path.join(root, scene, "pts")
    if os.path.exists(pts_dir):
        shutil.rmtree(pts_dir)
        print(f"Removed {pts_dir}")

    # Remove the normal data
    nrm_dir = os.path.join(root, scene, "nrm")
    if os.path.exists(nrm_dir):
        shutil.rmtree(nrm_dir)
        print(f"Removed {nrm_dir}")

    # Remove the scan point index data
    scan_points_indices_dir = os.path.join(root, scene, "scan_points_indices")
    if os.path.exists(scan_points_indices_dir):
        shutil.rmtree(scan_points_indices_dir)
        print(f"Removed {scan_points_indices_dir}")

    # Remove the scan points file
    scan_points_file = os.path.join(root, scene, "scan_points.txt")
    if os.path.exists(scan_points_file):
        os.remove(scan_points_file)
        print(f"Removed {scan_points_file}")

def clean_all_scenes(root, scene_list):
    for idx, scene in enumerate(scene_list):
        print(f"Cleaning scene {scene} ({idx+1}/{len(scene_list)})")
        clean_scene_data(root, scene)

if __name__ == "__main__":
    root = r"c:\Document\Local Project\nbv_rec\nbv_reconstruction\temp"
    scene_list = os.listdir(root)
    from_idx = 0
    to_idx = len(scene_list)
    print(f"Cleaning scenes {scene_list[from_idx:to_idx]}")

    clean_all_scenes(root, scene_list[from_idx:to_idx])
    print("Cleaning finished")
@@ -9,8 +9,6 @@ from utils.reconstruction import ReconstructionUtil
 from utils.data_load import DataLoadUtil
 from utils.pts import PtsUtil
 
-# scan shoe 536
-
 def save_np_pts(path, pts: np.ndarray, file_type="txt"):
     if file_type == "txt":
         np.savetxt(path, pts)
@@ -23,6 +21,12 @@ def save_target_points(root, scene, frame_idx, target_points: np.ndarray, file_t
     if not os.path.exists(os.path.join(root,scene, "pts")):
         os.makedirs(os.path.join(root,scene, "pts"))
     save_np_pts(pts_path, target_points, file_type)
 
+def save_target_normals(root, scene, frame_idx, target_normals: np.ndarray, file_type="txt"):
+    pts_path = os.path.join(root,scene, "nrm", f"{frame_idx}.{file_type}")
+    if not os.path.exists(os.path.join(root,scene, "nrm")):
+        os.makedirs(os.path.join(root,scene, "nrm"))
+    save_np_pts(pts_path, target_normals, file_type)
+
 def save_scan_points_indices(root, scene, frame_idx, scan_points_indices: np.ndarray, file_type="txt"):
     indices_path = os.path.join(root,scene, "scan_points_indices", f"{frame_idx}.{file_type}")
@@ -87,8 +91,8 @@ def get_scan_points_indices(scan_points, mask, display_table_mask_label, cam_int
 def save_scene_data(root, scene, scene_idx=0, scene_total=1,file_type="txt"):
 
     ''' configuration '''
-    target_mask_label = (0, 255, 0, 255)
-    display_table_mask_label=(0, 0, 255, 255)
+    target_mask_label = (0, 255, 0)
+    display_table_mask_label=(0, 0, 255)
     random_downsample_N = 32768
     voxel_size=0.003
     filter_degree = 75
@@ -137,7 +141,7 @@ def save_scene_data(root, scene, scene_idx=0, scene_total=1,file_type="txt"):
         has_points = target_points.shape[0] > 0
 
         if has_points:
-            target_points = PtsUtil.filter_points(
+            target_points, target_normals = PtsUtil.filter_points(
                 target_points, sampled_target_normal_L, cam_info["cam_to_world"], theta_limit = filter_degree, z_range=(min_z, max_z)
             )
 
@@ -149,8 +153,10 @@ def save_scene_data(root, scene, scene_idx=0, scene_total=1,file_type="txt"):
         if not has_points:
             target_points = np.zeros((0, 3))
+            target_normals = np.zeros((0, 3))
 
         save_target_points(root, scene, frame_id, target_points, file_type=file_type)
+        save_target_normals(root, scene, frame_id, target_normals, file_type=file_type)
        save_scan_points_indices(root, scene, frame_id, scan_points_indices, file_type=file_type)
 
     save_scan_points(root, scene, scan_points) # The "done" flag of scene preprocess
@@ -158,17 +164,10 @@ def save_scene_data(root, scene, scene_idx=0, scene_total=1,file_type="txt"):
 
 if __name__ == "__main__":
     #root = "/media/hofee/repository/new_data_with_normal"
-    root = r"C:\\Document\\Local Project\\nbv_rec\\nbv_reconstruction\\temp"
-    # list_path = r"/media/hofee/repository/full_list.txt"
-    # scene_list = []
-
-    # with open(list_path, "r") as f:
-    #     for line in f:
-    #         scene_list.append(line.strip())
+    root = r"C:\Document\Datasets\nbv_rec_part2"
     scene_list = os.listdir(root)
-    from_idx = 0 # 1000
-    to_idx = 1 # 1500
-    print(scene_list)
-
+    from_idx = 600 # 1000
+    to_idx = len(scene_list) # 1500
 
     cnt = 0
@@ -176,6 +175,10 @@ if __name__ == "__main__":
     total = to_idx - from_idx
     for scene in scene_list[from_idx:to_idx]:
         start = time.time()
+        if os.path.exists(os.path.join(root, scene, "scan_points.txt")):
+            print(f"Scene {scene} has been processed")
+            cnt+=1
+            continue
         save_scene_data(root, scene, cnt, total, file_type="npy")
         cnt+=1
         end = time.time()
@@ -22,20 +22,17 @@ class StrategyGenerator(Runner):
             "app_name": "generate_strategy",
             "runner_name": "strategy_generator"
         }
-        self.to_specified_dir = ConfigManager.get("runner", "generate", "to_specified_dir")
-        self.save_best_combined_pts = ConfigManager.get("runner", "generate", "save_best_combined_points")
-        self.save_mesh = ConfigManager.get("runner", "generate", "save_mesh")
-        self.load_pts = ConfigManager.get("runner", "generate", "load_points")
-        self.filter_degree = ConfigManager.get("runner", "generate", "filter_degree")
         self.overwrite = ConfigManager.get("runner", "generate", "overwrite")
-        self.save_pts = ConfigManager.get("runner","generate","save_points")
         self.seq_num = ConfigManager.get("runner","generate","seq_num")
+        self.overlap_area_threshold = ConfigManager.get("runner","generate","overlap_area_threshold")
+        self.compute_with_normal = ConfigManager.get("runner","generate","compute_with_normal")
+        self.scan_points_threshold = ConfigManager.get("runner","generate","scan_points_threshold")
 
     def run(self):
         dataset_name_list = ConfigManager.get("runner", "generate", "dataset_list")
-        voxel_threshold, soft_overlap_threshold, hard_overlap_threshold = ConfigManager.get("runner","generate","voxel_threshold"), ConfigManager.get("runner","generate","soft_overlap_threshold"), ConfigManager.get("runner","generate","hard_overlap_threshold")
+        voxel_threshold = ConfigManager.get("runner","generate","voxel_threshold")
         for dataset_idx in range(len(dataset_name_list)):
             dataset_name = dataset_name_list[dataset_idx]
             status_manager.set_progress("generate_strategy", "strategy_generator", "dataset", dataset_idx, len(dataset_name_list))
@@ -57,7 +54,7 @@ class StrategyGenerator(Runner):
                 cnt += 1
                 continue
 
-            self.generate_sequence(root_dir, scene_name,voxel_threshold, soft_overlap_threshold, hard_overlap_threshold)
+            self.generate_sequence(root_dir, scene_name,voxel_threshold)
             cnt += 1
         status_manager.set_progress("generate_strategy", "strategy_generator", "scene", total, total)
         status_manager.set_progress("generate_strategy", "strategy_generator", "dataset", len(dataset_name_list), len(dataset_name_list))
@@ -70,28 +67,34 @@ class StrategyGenerator(Runner):
     def load_experiment(self, backup_name=None):
         super().load_experiment(backup_name)
 
-    def generate_sequence(self, root, scene_name, voxel_threshold, soft_overlap_threshold, hard_overlap_threshold):
+    def generate_sequence(self, root, scene_name, voxel_threshold):
         status_manager.set_status("generate_strategy", "strategy_generator", "scene", scene_name)
         frame_num = DataLoadUtil.get_scene_seq_length(root, scene_name)
 
         model_points_normals = DataLoadUtil.load_points_normals(root, scene_name)
         model_pts = model_points_normals[:,:3]
-        down_sampled_model_pts = PtsUtil.voxel_downsample_point_cloud(model_pts, voxel_threshold)
+        down_sampled_model_pts, idx = PtsUtil.voxel_downsample_point_cloud(model_pts, voxel_threshold, require_idx=True)
+        down_sampled_model_nrm = model_points_normals[idx, 3:]
         pts_list = []
+        nrm_list = []
         scan_points_indices_list = []
         non_zero_cnt = 0
 
         for frame_idx in range(frame_num):
             status_manager.set_progress("generate_strategy", "strategy_generator", "loading frame", frame_idx, frame_num)
             pts_path = os.path.join(root,scene_name, "pts", f"{frame_idx}.npy")
+            nrm_path = os.path.join(root,scene_name, "nrm", f"{frame_idx}.npy")
             idx_path = os.path.join(root,scene_name, "scan_points_indices", f"{frame_idx}.npy")
-            point_cloud = np.load(pts_path)
-            sampled_point_cloud = PtsUtil.voxel_downsample_point_cloud(point_cloud, voxel_threshold)
+            pts = np.load(pts_path)
+            if pts.shape[0] == 0:
+                nrm = np.zeros((0,3))
+            else:
+                nrm = np.load(nrm_path)
             indices = np.load(idx_path)
-            pts_list.append(sampled_point_cloud)
+            pts_list.append(pts)
+            nrm_list.append(nrm)
             scan_points_indices_list.append(indices)
-            if sampled_point_cloud.shape[0] > 0:
+            if pts.shape[0] > 0:
                 non_zero_cnt += 1
         status_manager.set_progress("generate_strategy", "strategy_generator", "loading frame", frame_num, frame_num)
@@ -99,7 +102,7 @@ class StrategyGenerator(Runner):
         init_view_list = []
         idx = 0
         while len(init_view_list) < seq_num and idx < len(pts_list):
-            if pts_list[idx].shape[0] > 100:
+            if pts_list[idx].shape[0] > 50:
                 init_view_list.append(idx)
             idx += 1
 
@@ -108,8 +111,13 @@ class StrategyGenerator(Runner):
         for init_view in init_view_list:
             status_manager.set_progress("generate_strategy", "strategy_generator", "computing sequence", seq_idx, len(init_view_list))
             start = time.time()
-            limited_useful_view, _, _ = ReconstructionUtil.compute_next_best_view_sequence_with_overlap(down_sampled_model_pts, pts_list, scan_points_indices_list = scan_points_indices_list,init_view=init_view,
-                threshold=voxel_threshold, soft_overlap_threshold=soft_overlap_threshold, hard_overlap_threshold= hard_overlap_threshold, scan_points_threshold=10, status_info=self.status_info)
+            if not self.compute_with_normal:
+                limited_useful_view, _, _ = ReconstructionUtil.compute_next_best_view_sequence(down_sampled_model_pts, pts_list, scan_points_indices_list = scan_points_indices_list,init_view=init_view,
+                    threshold=voxel_threshold, scan_points_threshold=self.scan_points_threshold, overlap_area_threshold=self.overlap_area_threshold, status_info=self.status_info)
+            else:
+                limited_useful_view, _, _ = ReconstructionUtil.compute_next_best_view_sequence_with_normal(down_sampled_model_pts, down_sampled_model_nrm, pts_list, nrm_list, scan_points_indices_list = scan_points_indices_list,init_view=init_view,
+                    threshold=voxel_threshold, scan_points_threshold=self.scan_points_threshold, overlap_area_threshold=self.overlap_area_threshold, status_info=self.status_info)
             end = time.time()
             print(f"Time: {end-start}")
             data_pairs = self.generate_data_pairs(limited_useful_view)
@@ -9,7 +9,7 @@ class ViewGenerator(Runner):
         self.config_path = config_path
 
     def run(self):
-        result = subprocess.run(['blender', '-b', '-P', '../blender/run_blender.py', '--', self.config_path])
+        result = subprocess.run(['/home/hofee/blender-4.0.2-linux-x64/blender', '-b', '-P', '../blender/run_blender.py', '--', self.config_path])
         print()
 
     def create_experiment(self, backup_name=None):
@@ -14,23 +14,16 @@ class DataLoadUtil:
 
     @staticmethod
     def load_exr_image(file_path):
-        # Open the EXR file
         exr_file = OpenEXR.InputFile(file_path)
 
-        # Read the EXR header, including the image size
         header = exr_file.header()
        dw = header['dataWindow']
        width = dw.max.x - dw.min.x + 1
        height = dw.max.y - dw.min.y + 1
 
-        # Define the channels; normal images are typically RGB
         float_channels = ['R', 'G', 'B']
 
-        # Read each channel of the EXR file and convert it to a float array
         img_data = []
         for channel in float_channels:
-            channel_data = exr_file.channel(channel, Imath.PixelType(Imath.PixelType.FLOAT))
-            img_data.append(np.frombuffer(channel_data, dtype=np.float32).reshape((height, width)))
+            channel_data = exr_file.channel(channel)
+            img_data.append(np.frombuffer(channel_data, dtype=np.float16).reshape((height, width)))
 
         # Combine the channels into a single (height, width, 3) RGB image
         img = np.stack(img_data, axis=-1)
@@ -143,8 +136,8 @@ class DataLoadUtil:
         if binocular and not left_only:
 
             def clean_mask(mask_image):
-                green = [0, 255, 0, 255]
-                red = [255, 0, 0, 255]
+                green = [0, 255, 0]
+                red = [255, 0, 0]
                 threshold = 2
                 mask_image = np.where(
                     np.abs(mask_image - green) <= threshold, green, mask_image
utils/pts.py
@@ -5,10 +5,17 @@ import torch
 class PtsUtil:
 
     @staticmethod
-    def voxel_downsample_point_cloud(point_cloud, voxel_size=0.005):
+    def voxel_downsample_point_cloud(point_cloud, voxel_size=0.005, require_idx=False):
         voxel_indices = np.floor(point_cloud / voxel_size).astype(np.int32)
-        unique_voxels = np.unique(voxel_indices, axis=0, return_inverse=True)
-        return unique_voxels[0]*voxel_size
+        if require_idx:
+            _, inverse, counts = np.unique(voxel_indices, axis=0, return_inverse=True, return_counts=True)
+            idx_sort = np.argsort(inverse)
+            idx_unique = idx_sort[np.cumsum(counts)-counts]
+            downsampled_points = point_cloud[idx_unique]
+            return downsampled_points, idx_unique
+        else:
+            unique_voxels = np.unique(voxel_indices, axis=0, return_inverse=True)
+            return unique_voxels[0]*voxel_size
 
     @staticmethod
     def random_downsample_point_cloud(point_cloud, num_points, require_idx=False):
@@ -84,14 +91,14 @@ class PtsUtil:
         theta = np.arccos(cos_theta) * 180 / np.pi
         idx = theta < theta_limit
         filtered_sampled_points = points[idx]
+        filtered_normals = normals[idx]
 
         """ filter with z range """
         points_cam = PtsUtil.transform_point_cloud(filtered_sampled_points, np.linalg.inv(cam_pose))
         idx = (points_cam[:, 2] > z_range[0]) & (points_cam[:, 2] < z_range[1])
         z_filtered_points = filtered_sampled_points[idx]
+        z_filtered_normals = filtered_normals[idx]
-        return z_filtered_points[:, :3]
+        return z_filtered_points[:, :3], z_filtered_normals
 
     @staticmethod
     def point_to_hash(point, voxel_size):
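A minimal usage sketch of the new `require_idx` option in `PtsUtil.voxel_downsample_point_cloud`, mirroring how `strategy_generator.py` uses it in this commit; the packed `(N, 6)` points-plus-normals array here is a hypothetical example input:

```python
import numpy as np
from utils.pts import PtsUtil

# Hypothetical (N, 6) array: columns 0-2 are xyz, columns 3-5 are normals.
model_points_normals = np.random.rand(1000, 6)
model_pts = model_points_normals[:, :3]

# With require_idx=True the function returns one representative point per voxel
# plus the indices of those points in the input cloud, so per-point attributes
# (here, the normals) can be carried along with the downsampled points.
down_pts, idx = PtsUtil.voxel_downsample_point_cloud(model_pts, 0.003, require_idx=True)
down_nrm = model_points_normals[idx, 3:]
assert down_pts.shape[0] == down_nrm.shape[0]
```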
@@ -8,16 +8,23 @@ class ReconstructionUtil:
     def compute_coverage_rate(target_point_cloud, combined_point_cloud, threshold=0.01):
         kdtree = cKDTree(combined_point_cloud)
         distances, _ = kdtree.query(target_point_cloud)
-        covered_points_num = np.sum(distances < threshold)
+        covered_points_num = np.sum(distances < threshold*2)
         coverage_rate = covered_points_num / target_point_cloud.shape[0]
         return coverage_rate, covered_points_num
 
+    @staticmethod
     def compute_coverage_rate_with_normal(target_point_cloud, combined_point_cloud, target_normal, combined_normal, threshold=0.01, normal_threshold=0.1):
         kdtree = cKDTree(combined_point_cloud)
         distances, indices = kdtree.query(target_point_cloud)
-        is_covered_by_distance = distances < threshold
+        is_covered_by_distance = distances < threshold*2
         normal_dots = np.einsum('ij,ij->i', target_normal, combined_normal[indices])
         is_covered_by_normal = normal_dots > normal_threshold
 
+        pts_nrm_target = np.hstack([target_point_cloud, target_normal])
+        np.savetxt("pts_nrm_target.txt", pts_nrm_target)
+        pts_nrm_combined = np.hstack([combined_point_cloud, combined_normal])
+        np.savetxt("pts_nrm_combined.txt", pts_nrm_combined)
+        import ipdb; ipdb.set_trace()
         covered_points_num = np.sum(is_covered_by_distance & is_covered_by_normal)
         coverage_rate = covered_points_num / target_point_cloud.shape[0]
 
@@ -25,15 +32,14 @@ class ReconstructionUtil:
 
 
     @staticmethod
-    def compute_overlap_rate(new_point_cloud, combined_point_cloud, threshold=0.01):
+    def check_overlap(new_point_cloud, combined_point_cloud, overlap_area_threshold=25, voxel_size=0.01):
         kdtree = cKDTree(combined_point_cloud)
         distances, _ = kdtree.query(new_point_cloud)
-        overlapping_points = np.sum(distances < threshold)
-        if new_point_cloud.shape[0] == 0:
-            overlap_rate = 0
-        else:
-            overlap_rate = overlapping_points / new_point_cloud.shape[0]
-        return overlap_rate
+        overlapping_points = np.sum(distances < voxel_size*2)
+        cm = 0.01
+        voxel_size_cm = voxel_size / cm
+        overlap_area = overlapping_points * voxel_size_cm * voxel_size_cm
+        return overlap_area > overlap_area_threshold
 
 
     @staticmethod
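A rough reading of the new `check_overlap` criterion, with hypothetical numbers: each overlapping downsampled point is counted as one voxel footprint of `(voxel_size / 1 cm)^2` square centimeters, and the candidate view passes only if the summed footprint exceeds `overlap_area_threshold` (25 cm² by default):

```python
voxel_size = 0.003          # 3 mm voxels, as in the updated configs
overlapping_points = 3000   # hypothetical count of points within 2 * voxel_size of the combined cloud
overlap_area = overlapping_points * (voxel_size / 0.01) ** 2   # 3000 * 0.09 = 270 cm^2
print(overlap_area > 25)    # True -> the view is considered sufficiently overlapping
```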
@@ -49,7 +55,7 @@ class ReconstructionUtil:
         return new_added_points
 
     @staticmethod
-    def compute_next_best_view_sequence_with_overlap(target_point_cloud, point_cloud_list, scan_points_indices_list, threshold=0.01, soft_overlap_threshold=0.5, hard_overlap_threshold=0.7, init_view = 0, scan_points_threshold=5, status_info=None):
+    def compute_next_best_view_sequence(target_point_cloud, point_cloud_list, scan_points_indices_list, threshold=0.01, overlap_area_threshold=25, init_view = 0, scan_points_threshold=5, status_info=None):
         selected_views = [init_view]
         combined_point_cloud = point_cloud_list[init_view]
         history_indices = [scan_points_indices_list[init_view]]
@@ -83,22 +89,16 @@ class ReconstructionUtil:
                 if selected_views:
                     new_scan_points_indices = scan_points_indices_list[view_index]
                     if not ReconstructionUtil.check_scan_points_overlap(history_indices, new_scan_points_indices, scan_points_threshold):
-                        overlap_threshold = hard_overlap_threshold
+                        curr_overlap_area_threshold = overlap_area_threshold
                     else:
-                        overlap_threshold = soft_overlap_threshold
-                    start = time.time()
-                    overlap_rate = ReconstructionUtil.compute_overlap_rate(point_cloud_list[view_index],combined_point_cloud, threshold)
-                    end = time.time()
-                    # print(f"overlap_rate Time: {end-start}")
-                    if overlap_rate < overlap_threshold:
+                        curr_overlap_area_threshold = overlap_area_threshold * 0.5
+                    if not ReconstructionUtil.check_overlap(point_cloud_list[view_index], combined_point_cloud, overlap_area_threshold = curr_overlap_area_threshold, voxel_size=threshold):
                         continue
 
-                start = time.time()
                 new_combined_point_cloud = np.vstack([combined_point_cloud, point_cloud_list[view_index]])
                 new_downsampled_combined_point_cloud = PtsUtil.voxel_downsample_point_cloud(new_combined_point_cloud,threshold)
                 new_coverage, new_covered_num = ReconstructionUtil.compute_coverage_rate(downsampled_max_rec_pts, new_downsampled_combined_point_cloud, threshold)
-                end = time.time()
-                #print(f"compute_coverage_rate Time: {end-start}")
                 coverage_increase = new_coverage - current_coverage
                 if coverage_increase > best_coverage_increase:
                     best_coverage_increase = coverage_increase
@@ -107,6 +107,100 @@ class ReconstructionUtil:
                     best_combined_point_cloud = new_downsampled_combined_point_cloud
 
 
+            if best_view is not None:
+                if best_coverage_increase <=1e-3 or best_covered_num - current_covered_num <= 5:
+                    break
+
+                selected_views.append(best_view)
+                best_rec_pts_num = best_combined_point_cloud.shape[0]
+                print(f"Current rec pts num: {curr_rec_pts_num}, Best rec pts num: {best_rec_pts_num}, Best cover pts: {best_covered_num}, Max rec pts num: {max_rec_pts_num}")
+                print(f"Current coverage: {current_coverage+best_coverage_increase}, Best coverage increase: {best_coverage_increase}, Max Real coverage: {max_real_rec_pts_coverage}")
+                current_covered_num = best_covered_num
+                curr_rec_pts_num = best_rec_pts_num
+                combined_point_cloud = best_combined_point_cloud
+                remaining_views.remove(best_view)
+                history_indices.append(scan_points_indices_list[best_view])
+                current_coverage += best_coverage_increase
+                cnt_processed_view += 1
+                if status_info is not None:
+                    sm = status_info["status_manager"]
+                    app_name = status_info["app_name"]
+                    runner_name = status_info["runner_name"]
+                    sm.set_status(app_name, runner_name, "current coverage", current_coverage)
+                    sm.set_progress(app_name, runner_name, "processed view", cnt_processed_view, len(point_cloud_list))
+
+                view_sequence.append((best_view, current_coverage))
+
+            else:
+                break
+        if status_info is not None:
+            sm = status_info["status_manager"]
+            app_name = status_info["app_name"]
+            runner_name = status_info["runner_name"]
+            sm.set_progress(app_name, runner_name, "processed view", len(point_cloud_list), len(point_cloud_list))
+        return view_sequence, remaining_views, combined_point_cloud
+
+    @staticmethod
+    def compute_next_best_view_sequence_with_normal(target_point_cloud, target_normal, point_cloud_list, normal_list, scan_points_indices_list, threshold=0.01, overlap_area_threshold=25, init_view = 0, scan_points_threshold=5, status_info=None):
+        selected_views = [init_view]
+        combined_point_cloud = point_cloud_list[init_view]
+        combined_normal = normal_list[init_view]
+        history_indices = [scan_points_indices_list[init_view]]
+
+        max_rec_pts = np.vstack(point_cloud_list)
+        max_rec_nrm = np.vstack(normal_list)
+        downsampled_max_rec_pts, idx = PtsUtil.voxel_downsample_point_cloud(max_rec_pts, threshold, require_idx=True)
+        downsampled_max_rec_nrm = max_rec_nrm[idx]
+        max_rec_pts_num = downsampled_max_rec_pts.shape[0]
+        try:
+            max_real_rec_pts_coverage, _ = ReconstructionUtil.compute_coverage_rate_with_normal(target_point_cloud, downsampled_max_rec_pts, target_normal, downsampled_max_rec_nrm, threshold)
+        except:
+            import ipdb; ipdb.set_trace()
+
+        new_coverage, new_covered_num = ReconstructionUtil.compute_coverage_rate_with_normal(downsampled_max_rec_pts, combined_point_cloud, downsampled_max_rec_nrm, combined_normal, threshold)
+        current_coverage = new_coverage
+        current_covered_num = new_covered_num
+
+        remaining_views = list(range(len(point_cloud_list)))
+        view_sequence = [(init_view, current_coverage)]
+        cnt_processed_view = 0
+        remaining_views.remove(init_view)
+        curr_rec_pts_num = combined_point_cloud.shape[0]
+
+        while remaining_views:
+            best_view = None
+            best_coverage_increase = -1
+            best_combined_point_cloud = None
+            best_combined_normal = None
+            best_covered_num = 0
+
+            for view_index in remaining_views:
+                if point_cloud_list[view_index].shape[0] == 0:
+                    continue
+                if selected_views:
+                    new_scan_points_indices = scan_points_indices_list[view_index]
+                    if not ReconstructionUtil.check_scan_points_overlap(history_indices, new_scan_points_indices, scan_points_threshold):
+                        curr_overlap_area_threshold = overlap_area_threshold
+                    else:
+                        curr_overlap_area_threshold = overlap_area_threshold * 0.5
+
+                    if not ReconstructionUtil.check_overlap(point_cloud_list[view_index], combined_point_cloud, overlap_area_threshold = curr_overlap_area_threshold, voxel_size=threshold):
+                        continue
+
+                new_combined_point_cloud = np.vstack([combined_point_cloud, point_cloud_list[view_index]])
+                new_combined_normal = np.vstack([combined_normal, normal_list[view_index]])
+                new_downsampled_combined_point_cloud, idx = PtsUtil.voxel_downsample_point_cloud(new_combined_point_cloud,threshold, require_idx=True)
+                new_downsampled_combined_normal = new_combined_normal[idx]
+                new_coverage, new_covered_num = ReconstructionUtil.compute_coverage_rate_with_normal(downsampled_max_rec_pts, new_downsampled_combined_point_cloud, downsampled_max_rec_nrm, new_downsampled_combined_normal, threshold)
+                coverage_increase = new_coverage - current_coverage
+                if coverage_increase > best_coverage_increase:
+                    best_coverage_increase = coverage_increase
+                    best_view = view_index
+                    best_covered_num = new_covered_num
+                    best_combined_point_cloud = new_downsampled_combined_point_cloud
+                    best_combined_normal = new_downsampled_combined_normal
+
+
             if best_view is not None:
                 if best_coverage_increase <=1e-3 or best_covered_num - current_covered_num <= 5:
                     break
@@ -118,6 +212,7 @@ class ReconstructionUtil:
             current_covered_num = best_covered_num
             curr_rec_pts_num = best_rec_pts_num
             combined_point_cloud = best_combined_point_cloud
+            combined_normal = best_combined_normal
             remaining_views.remove(best_view)
             history_indices.append(scan_points_indices_list[best_view])
             current_coverage += best_coverage_increase
utils/vis.py
@@ -47,6 +47,42 @@ class visualizeUtil:
         all_combined_pts = np.vstack(all_combined_pts)
         downsampled_all_pts = PtsUtil.voxel_downsample_point_cloud(all_combined_pts, 0.001)
         np.savetxt(os.path.join(output_dir, "all_combined_pts.txt"), downsampled_all_pts)
+
+    @staticmethod
+    def save_seq_cam_pos_and_cam_axis(root, scene, frame_idx_list, output_dir):
+        all_cam_pos = []
+        all_cam_axis = []
+        for i in frame_idx_list:
+            path = DataLoadUtil.get_path(root, scene, i)
+            cam_info = DataLoadUtil.load_cam_info(path, binocular=True)
+            cam_pose = cam_info["cam_to_world"]
+            cam_pos = cam_pose[:3, 3]
+            cam_axis = cam_pose[:3, 2]
+
+            num_samples = 10
+            sample_points = [cam_pos + 0.02*t * cam_axis for t in range(num_samples)]
+            sample_points = np.array(sample_points)
+
+            all_cam_pos.append(cam_pos)
+            all_cam_axis.append(sample_points)
+
+        all_cam_pos = np.array(all_cam_pos)
+        all_cam_axis = np.array(all_cam_axis).reshape(-1, 3)
+        np.savetxt(os.path.join(output_dir, "seq_cam_pos.txt"), all_cam_pos)
+        np.savetxt(os.path.join(output_dir, "seq_cam_axis.txt"), all_cam_axis)
+
+    @staticmethod
+    def save_seq_combined_pts(root, scene, frame_idx_list, output_dir):
+        all_combined_pts = []
+        for i in frame_idx_list:
+            path = DataLoadUtil.get_path(root, scene, i)
+            pts = DataLoadUtil.load_from_preprocessed_pts(path,"npy")
+            if pts.shape[0] == 0:
+                continue
+            all_combined_pts.append(pts)
+        all_combined_pts = np.vstack(all_combined_pts)
+        downsampled_all_pts = PtsUtil.voxel_downsample_point_cloud(all_combined_pts, 0.001)
+        np.savetxt(os.path.join(output_dir, "seq_combined_pts.txt"), downsampled_all_pts)
+
     @staticmethod
     def save_target_mesh_at_world_space(
@@ -120,18 +156,34 @@ class visualizeUtil:
         sampled_visualized_normal = np.array(sampled_visualized_normal).reshape(-1, 3)
         np.savetxt(os.path.join(output_dir, "target_pts.txt"), sampled_target_points)
         np.savetxt(os.path.join(output_dir, "target_normal.txt"), sampled_visualized_normal)
+
+    @staticmethod
+    def save_pts_nrm(pts_nrm, output_dir):
+        pts = pts_nrm[:, :3]
+        nrm = pts_nrm[:, 3:]
+        visualized_nrm = []
+        num_samples = 10
+        for i in range(len(pts)):
+            visualized_nrm.append(pts[i] + 0.02*t * nrm[i] for t in range(num_samples))
+        visualized_nrm = np.array(visualized_nrm).reshape(-1, 3)
+        np.savetxt(os.path.join(output_dir, "nrm.txt"), visualized_nrm)
+        np.savetxt(os.path.join(output_dir, "pts.txt"), pts)
+
 # ------ Debug ------
 
 if __name__ == "__main__":
-    root = r"/home/yan20/nbv_rec/project/franka_control/temp"
+    root = r"C:\Document\Local Project\nbv_rec\nbv_reconstruction\temp"
     model_dir = r"H:\\AI\\Datasets\\scaled_object_box_meshes"
-    scene = "cad_model_world"
+    scene = "box"
-    output_dir = r"/home/yan20/nbv_rec/project/franka_control/temp/output"
+    output_dir = r"C:\Document\Local Project\nbv_rec\nbv_reconstruction\test"
 
-    visualizeUtil.save_all_cam_pos_and_cam_axis(root, scene, output_dir)
+    #visualizeUtil.save_all_cam_pos_and_cam_axis(root, scene, output_dir)
-    visualizeUtil.save_all_combined_pts(root, scene, output_dir)
+    # visualizeUtil.save_all_combined_pts(root, scene, output_dir)
-    visualizeUtil.save_target_mesh_at_world_space(root, model_dir, scene)
+    # visualizeUtil.save_seq_combined_pts(root, scene, [0, 121, 286, 175, 111,366,45,230,232,225,255,17,199,78,60], output_dir)
-    #visualizeUtil.save_points_and_normals(root, scene,"10", output_dir, binocular=True)
+    # visualizeUtil.save_seq_cam_pos_and_cam_axis(root, scene, [0, 121, 286, 175, 111,366,45,230,232,225,255,17,199,78,60], output_dir)
+    # visualizeUtil.save_target_mesh_at_world_space(root, model_dir, scene)
+    #visualizeUtil.save_points_and_normals(root, scene,"10", output_dir, binocular=True)
+    pts_nrm = np.loadtxt(r"C:\Document\Local Project\nbv_rec\nbv_reconstruction\pts_nrm_target.txt")
+    visualizeUtil.save_pts_nrm(pts_nrm, output_dir)