Wei Ji

weiji14

Geospatial Data Scientist. Towards cloud-native geospatial machine learning!

  • Development Seed

weiji14 created repository weiji14/cog3pio

1 month ago

weiji14 created repository weiji14/zen3geo

1 year ago

weiji14 synced and deleted reference dependabot/pip/jupyter-server-proxy-3.2.1 at weiji14/deepicedrain from mirror

2 years ago

weiji14 synced and deleted reference dependabot/pip/ipython-7.31.1 at weiji14/deepicedrain from mirror

2 years ago

weiji14 synced and deleted reference dependabot/pip/pillow-9.0.1 at weiji14/deepicedrain from mirror

2 years ago

weiji14 synced new reference dependabot/pip/ipython-7.31.1 to weiji14/deepicedrain from mirror

2 years ago

weiji14 synced commits to atlxi_dhdt_20210715 at weiji14/deepicedrain from mirror

  • 48f413149e :green_heart: Fix IndexError due to Lake Whillans 6 disappearing
    Since Subglacial Lake Whillans 6 'disappeared' in e5e91cd039e5800d7727498efa3176e61f65ea81, it shouldn't be used in the integration test anymore. Oh yes, time to get back into this ICESat-2 project! Running things on a new workstation now, and there are some exciting new libraries, so best to wrap this up!
    Fixed a `TypeError: __init__() got an unexpected keyword argument 'calc_core_sample_indices'` caused by scikit-learn/cuml API differences (see the sketch after this entry). Also gitignoring the dask-worker-space folder and some of the vector data files.
    Had to bump llvmlite from 0.36.0 to 0.38.0 in the poetry.lock file to match the conda-installed version and prevent "ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall". Similar business with d89186ec5dc189f1dda35618d2d504beb26c7a0d.

2 years ago
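The `TypeError` above arises because cuml's GPU-accelerated DBSCAN accepts a `calc_core_sample_indices` keyword argument that scikit-learn's CPU implementation does not. A minimal sketch of guarding for that API difference; `make_dbscan` is a hypothetical helper name, not a function from the deepicedrain codebase:

```python
def make_dbscan(eps: float = 3000, min_samples: int = 300):
    """Return a DBSCAN instance, preferring the GPU implementation."""
    try:
        import cuml  # GPU-accelerated; has the extra keyword argument

        return cuml.DBSCAN(
            eps=eps, min_samples=min_samples, calc_core_sample_indices=False
        )
    except ImportError:
        import sklearn.cluster  # CPU fallback; no such keyword argument

        return sklearn.cluster.DBSCAN(eps=eps, min_samples=min_samples)
```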

weiji14 synced commits to atlxi_dhdt_20210715 at weiji14/deepicedrain from mirror

  • 0cec859288 :alembic: Increase DBSCAN min_samples from 300 to 320
    Reduces the active subglacial lake inventory count from 221 to 204. This gets rid of some likely false-positive lakes, in particular a large one over the grounding zone of Pine Island Glacier.

2 years ago

weiji14 synced commits to atlxi_dhdt_20210715 at weiji14/deepicedrain from mirror

  • e5e91cd039 :package: Detect active subglacial lakes up to 20210715
    Re-running the clustering algorithm to detect Antarctic subglacial lakes with ICESat-2 ATL11 data up to 20210715. There are now 221 potential active lakes, compared to 193 before. Keeping the same DBSCAN hyperparameters as in the last run at ca79a32ac236b3d1baa378c74ad1b4c92c174151, specifically an eps of 3000 and min_samples of 300 (see the sketch after this entry).
    On the Siple Coast, Lake WIX seems to have switched from filling to draining, and a few more active subglacial lakes have 'disappeared', such as Whillans 6, Lake 78, Kamb 5 and Kamb 7, Macayeal 4, etc. Really need to get the new dhdt_maxslp data from 16061b47e93c17c2e1bb4743f53bf7dfa281e404 to work, but it's a lot noisier than I thought and I haven't managed to work out the best DBSCAN parameters to use. So just using the classic dhdt_slope data and battle-tested DBSCAN parameters for now.

2 years ago
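The clustering step above amounts to running DBSCAN over point coordinates in a metre-based projection. A minimal, self-contained sketch with scikit-learn on toy data; the column names and the random stand-in points are assumptions, not the actual ATL11 pipeline:

```python
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN

# Toy stand-in for ICESat-2 point data in a projected (metre-based) CRS.
rng = np.random.default_rng(seed=42)
df = pd.DataFrame(rng.uniform(0, 50_000, size=(2_000, 2)), columns=["x", "y"])

# Hyperparameters from the commit message: points within 3000 m of each
# other are neighbours, and a cluster needs at least 300 member points.
dbscan = DBSCAN(eps=3000, min_samples=300)

# Label -1 marks noise points that belong to no lake cluster.
df["cluster_id"] = dbscan.fit_predict(df[["x", "y"]])
print(df["cluster_id"].value_counts())
```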

weiji14 synced commits to atlxi_dhdt_20210715 at weiji14/deepicedrain from mirror

  • 16061b47e9 :sparkles: Calculate max slope of elev time-series using dhdt_maxslp
    New `dhdt_maxslp` function to calculate the maximum rate of elevation change over time (dhdt) for any consecutive pair of values within an ATL11 time-series. This lets us capture the signal of sudden elevation changes, even over a long period of time (>2 years) when active subglacial lakes that filled up may have drained and reverted to the old elevation, leaving little of a dhdt trend (a problem with the `nan_linregress` method used in dedd0f4b8ee8765e6ad594d5aba52dceacdf9b8a).
    This `dhdt_maxslp` function removes NaN values, does a rolling diff calculation, divides elevation over time to get dhdt values (for each consecutive pair), finds the index of the max absolute dhdt, and returns that maxslp result (see the sketch after this entry). Not too complicated, but it does take a bit of explaining, so there's some ASCII art to help, and I'll make sure to produce a proper figure later.

2 years ago
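A minimal re-implementation sketch of the steps listed above, using NumPy; the actual `dhdt_maxslp` in deepicedrain may differ in signature and vectorization:

```python
import numpy as np

def dhdt_maxslp(h: np.ndarray, t: np.ndarray) -> float:
    """Max rate of elevation change between consecutive points in a series."""
    # Drop NaN elevations (and their matching times).
    mask = ~np.isnan(h)
    h, t = h[mask], t[mask]
    # Rolling diff over consecutive pairs, then elevation over time -> dhdt.
    dhdt = np.diff(h) / np.diff(t)
    # Index of the maximum absolute dhdt; return the signed value there.
    return dhdt[np.argmax(np.abs(dhdt))]

# A lake that fills (+2 m/yr) then drains (-3 m/yr) shows maxslp = -3.0,
# even though a straight-line fit through the series would be nearly flat.
print(dhdt_maxslp(h=np.array([0.0, 2.0, np.nan, -1.0]),
                  t=np.array([0.0, 1.0, 1.5, 2.0])))
```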

weiji14 synced new reference atlxi_dhdt_20210715 to weiji14/deepicedrain from mirror

2 years ago

weiji14 synced commits to main at weiji14/deepicedrain from mirror

  • e3db4e0313 :arrow_up: Bump dask to 2021.10.0, dask-cuda to 21.12.0a211025, dask-labextension to 5.1.0
    Bumps [dask](https://github.com/dask/dask) from 2021.5.1 to 2021.10.0.
      - [Release notes](https://github.com/dask/dask/releases)
      - [Changelog](https://github.com/dask/dask/blob/master/docs/release-procedure.md)
      - [Commits](https://github.com/dask/dask/compare/2021.5.1...2021.10.0)
    Bumps [dask-cuda](https://github.com/rapidsai/dask-cuda) from 21.6.0 to 21.12.0a211025.
      - [Release notes](https://github.com/rapidsai/dask-cuda/releases)
      - [Changelog](https://github.com/rapidsai/dask-cuda/blob/branch-21.12/CHANGELOG.md)
      - [Commits](https://github.com/rapidsai/dask-cuda/compare/v21.06.00...v21.12.00a)
    Bumps [dask-labextension](https://github.com/dask/dask-labextension) from 5.0.2 to 5.1.0.
      - [Release notes](https://github.com/dask/dask-labextension/releases)
      - [Commits](https://github.com/dask/dask-labextension/compare/5.0.2...5.1.0)
    Also set distributed as an extra dependency under dask, and let dask-labextension be a dev-only dependency (see the sketch after this entry).

2 years ago
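A minimal pyproject.toml sketch of what "distributed as an extra dependency under dask" plus a dev-only dask-labextension could look like under poetry; the Python constraint and overall layout are assumptions, not the actual deepicedrain manifest:

```toml
[tool.poetry.dependencies]
python = "^3.8"  # assumed constraint, not from the commit
# Pull in distributed via dask's own "distributed" extra.
dask = {version = "2021.10.0", extras = ["distributed"]}
dask-cuda = "21.12.0a211025"

[tool.poetry.dev-dependencies]
# The JupyterLab dashboard extension is only needed during development.
dask-labextension = "5.1.0"
```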

weiji14 synced and deleted reference dependabot/pip/dask-labextension-5.1.0 at weiji14/deepicedrain from mirror

2 years ago

weiji14 synced new reference update/atl06_20210715 to weiji14/deepicedrain from mirror

2 years ago

weiji14 synced commits to main at weiji14/deepicedrain from mirror

  • a617aa1585 :pushpin: Pin cupy to 9.5.0, pyarrow to 5.0.0
    Downgrade cupy from 10.0.0a2 to 9.5.0, and upgrade pyarrow from 1.0.1 to 5.0.0 to fix some import errors.

2 years ago

weiji14 synced and deleted reference dependencies/poetry-1.1.11 at weiji14/deepicedrain from mirror

2 years ago

weiji14 synced new reference dependencies/poetry-1.1.11 to weiji14/deepicedrain from mirror

2 years ago

weiji14 synced commits to main at weiji14/ctcorenet from mirror

  • 5a59d8f3c7 :hankey: Implement predict mode for CTCoreDataset
    Part 1 of coding up the prediction/inference logic for the CTCoreDataset, i.e. loading only the CT core image data with no mask labels. Will be used when calling `trainer.predict` later. Images have to be resized to (4096, 512) so that the Unet model works (might need to handle 90-degree rotated images too). Also storing the original shapes in a list so that we can resize them back later (see the sketch after this entry).

2 years ago
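A minimal sketch of what that predict-mode dataset could look like: resize each image to the fixed (4096, 512) shape the Unet expects, and remember the original shape so predictions can be resized back afterwards. The class name follows the commit message, but the body is an assumption, not the actual ctcorenet code:

```python
import torch
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF

class CTCoreDataset(Dataset):
    """Predict-mode sketch: CT core images only, no mask labels."""

    def __init__(self, images: list):
        self.images = images       # list of (channels, height, width) tensors
        self.original_shapes = []  # kept so predictions can be resized back

    def __len__(self) -> int:
        return len(self.images)

    def __getitem__(self, idx: int) -> torch.Tensor:
        image = self.images[idx]
        self.original_shapes.append(image.shape[-2:])
        # Fixed size so the Unet's encoder/decoder feature maps line up.
        return TF.resize(image, size=[4096, 512])
```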

weiji14 synced commits to main at weiji14/ctcorenet from mirror

  • bf41508007 :truck: Split out CTCoreDataModule from ctcoreunet.py
    Organizing the project a bit more by separating the data loading code from the neural network code. The CTCoreData class is further split into two classes: CTCoreDataset and CTCoreDataModule. This modular code structure will help with adding the prediction data loader and inference scripts later (see the skeleton after this entry).

2 years ago
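A skeleton of that two-class split, assuming pytorch-lightning conventions; the method bodies and batch sizes are placeholders, not the actual ctcorenet code:

```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader, Dataset

class CTCoreDataset(Dataset):
    """Loads individual CT core image (and, when training, mask) samples."""
    ...

class CTCoreDataModule(pl.LightningDataModule):
    """Wraps CTCoreDataset and hands DataLoaders to the Trainer."""

    def setup(self, stage=None):
        # Build labelled datasets for fit, an unlabelled one for predict.
        self.dataset = CTCoreDataset()

    def train_dataloader(self) -> DataLoader:
        return DataLoader(self.dataset, batch_size=8)  # placeholder batch size

    def predict_dataloader(self) -> DataLoader:
        return DataLoader(self.dataset, batch_size=1)
```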

weiji14 synced commits to main at weiji14/ctcorenet from mirror

  • ca1d8b3715 :truck: Rename ctcorenet.py to ctcoreunet.py
    Needed to avoid some circular imports, see https://stackoverflow.com/questions/54333865/python-no-module-named-error-package-is-not-a-package. Also reduced max epochs in dvc.yaml to 1 to speed up continuous integration tests.
  • 33415983e2 :boom: Unet model with focal loss and IoU metric
    Using the Unet model architecture! Details are at https://github.com/mateuszbuda/brain-segmentation-pytorch. To make it perform better on the training dataset, I've swapped out the binary cross entropy loss function for a focal loss function. Manually tuned the focal loss hyperparameters to an alpha of 0.75 and a gamma of 2, and increased the Adam optimizer learning rate from 0.001 to 0.01 (see the sketch after this entry). Added an Intersection over Union (IoU) metric to check performance. Also updated the docs on how to train on 2 GPUs using distributed data parallel.
  • 7c46e79b37 :arrow_up: Bump pytorch to 1.9.1, pytorch-lightning to 1.4.8
    Bumps [pytorch](https://github.com/pytorch/pytorch) from 1.9.0 to 1.9.1.
      - [Release notes](https://github.com/pytorch/pytorch/releases)
      - [Changelog](https://github.com/pytorch/pytorch/blob/master/RELEASE.md)
      - [Commits](https://github.com/pytorch/pytorch/commits)
    Bumps [pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning) from 1.4.5 to 1.4.8.
      - [Release notes](https://github.com/PyTorchLightning/pytorch-lightning/releases)
      - [Changelog](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/CHANGELOG.md)
      - [Commits](https://github.com/PyTorchLightning/pytorch-lightning/compare/1.4.5...1.4.8)
    Bumps [torchvision](https://github.com/pytorch/vision) from 0.10.0 to 0.10.1.
      - [Release notes](https://github.com/pytorch/vision/releases)
      - [Commits](https://github.com/pytorch/vision/commits)
    Note that cudatoolkit had to be downgraded from 11.2.2 to 11.1.1 because we're installing pytorch from the pytorch conda channel now.
  • View comparison for these 3 commits »

2 years ago
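A minimal sketch of the loss/metric combination commit 33415983e2 describes, using torchvision's `sigmoid_focal_loss` as a stand-in for whichever focal loss implementation ctcorenet actually uses, with the manually tuned alpha of 0.75 and gamma of 2:

```python
import torch
from torchvision.ops import sigmoid_focal_loss

# Toy batch: raw logits from a segmentation model, plus binary masks.
logits = torch.randn(2, 1, 64, 64, requires_grad=True)
target = torch.randint(0, 2, (2, 1, 64, 64)).float()

# Focal loss in place of binary cross entropy, down-weighting easy
# background pixels; alpha/gamma as per the commit message.
loss = sigmoid_focal_loss(logits, target, alpha=0.75, gamma=2, reduction="mean")
loss.backward()

# Intersection over Union (IoU) metric to check segmentation performance.
pred = torch.sigmoid(logits) > 0.5
iou = (pred & target.bool()).sum() / (pred | target.bool()).sum()
print(f"loss={loss:.4f}, IoU={iou:.4f}")
```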