Harbor Installation and Configuration Guide
- Author: 五速梦信息网
- Date: 2026-04-20 07:08
Table of Contents: Overview · Download · Configuration · Installation · Files Generated After Installation · Using and Maintaining Harbor · References

Overview

Harbor is an open-source registry that secures artifacts with policies and role-based access control, ensures images are scanned and free from vulnerabilities, and signs images as trusted. A CNCF graduated project, Harbor delivers compliance, performance, and interoperability, helping you manage artifacts consistently and securely across cloud-native compute platforms such as Kubernetes and Docker. Harbor can be installed in any Kubernetes environment or on any system that supports Docker.

Download

Download the latest Harbor offline installer package, choosing the build that matches your operating system (usually a .tar.gz archive). For example, the downloaded file might be named harbor-offline-installer-v2.5.0.tgz.

Download link

Transfer the installer package to the server, for example with scp (between Linux systems) or WinSCP (between Windows and Linux).

Configuration

Extract the archive, for example: tar -zxvf harbor-offline-installer-v2.5.0.tgz

Enter the extracted harbor directory. It contains harbor.yml.tmpl, Harbor's configuration template file.

Copy the template file and name the copy harbor.yml, for example: cp harbor.yml.tmpl harbor.yml

Edit harbor.yml. The main configuration items are:

- Hostname (hostname): set to the server's IP address or domain name, e.g. hostname: 192.168.1.100. To use HTTPS, you must also configure the certificate options.
- Storage path (data_volume): the data storage directory, /data by default; it can be changed, e.g. data_volume: /mnt/harbor-data.
- Admin password (harbor_admin_password): the password for Harbor's administrator account, e.g. harbor_admin_password: your-password.
- Other optional settings: an external database (an embedded database is used by default), LDAP authentication, and so on can be configured in harbor.yml by following the official documentation.

The complete configuration template is shown below:
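Before walking through the full template, the preparation steps can be sketched end to end. Since the real installer archive is large, the snippet below uses a tiny stand-in template so the copy-and-edit steps can be demonstrated; on a real server you would extract the downloaded archive instead. The concrete values (192.168.1.100, your-password, /mnt/harbor-data) are the examples from the text, not requirements.

```shell
# On a real server, extract the downloaded installer first:
#   tar -zxvf harbor-offline-installer-v2.5.0.tgz && cd harbor
# Here we fake the extracted directory with a minimal stand-in template.
mkdir -p harbor && cd harbor
printf '%s\n' \
  'hostname: reg.mydomain.com' \
  'harbor_admin_password: Harbor12345' \
  'data_volume: /data' > harbor.yml.tmpl

# Keep the template pristine; edit a copy named harbor.yml.
cp harbor.yml.tmpl harbor.yml

# Set the server address, admin password, and data path (example values).
sed -i 's/^hostname:.*/hostname: 192.168.1.100/' harbor.yml
sed -i 's/^harbor_admin_password:.*/harbor_admin_password: your-password/' harbor.yml
sed -i 's|^data_volume:.*|data_volume: /mnt/harbor-data|' harbor.yml

cat harbor.yml
```

Keeping the edits as sed one-liners makes the configuration step repeatable, which is useful if you rebuild the server or script the install.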
```yaml
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: reg.mydomain.com

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
# (If you do not need HTTPS, comment this section out, otherwise installation will fail.)
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /your/certificate/path
  private_key: /your/private/key/path
  # enable strong ssl ciphers (default: false)
  # strong_ssl_ciphers: false

# # Harbor will set ipv4 enabled only by default if this block is not configured
# # Otherwise, please uncomment this block to configure your own ip_family stacks
# ip_family:
#   # ipv6Enabled set to true if ipv6 is enabled in docker network, currently it affects the nginx related component
#   ipv6:
#     enabled: false
#   # ipv4Enabled set to true by default, currently it affects the nginx related component
#   ipv4:
#     enabled: true

# # Uncommenting the following will enable tls communication between all harbor components
# internal_tls:
#   # set enabled to true means internal tls is enabled
#   enabled: true
#   # put your cert and key files in this dir
#   dir: /etc/harbor/tls/internal

# Uncomment external_url if you want to enable external proxy
# When it is enabled, the hostname will no longer be used
# external_url: https://reg.mydomain.com:8433

# The initial password of the Harbor admin
# It only works the first time Harbor is installed
# Remember to change the admin password from the UI after launching Harbor.
# UI console password; the default username is admin
harbor_admin_password: Harbor12345

# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it is <= 0, no idle connections are retained.
  max_idle_conns: 100
  # The maximum number of open connections to the database. If it is <= 0, there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for Harbor's postgres.
  max_open_conns: 900
  # The maximum amount of time a connection may be reused. Expired connections may be closed lazily before reuse. If it is <= 0, connections are not closed due to a connection's age.
  # The value is a duration string: a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  conn_max_lifetime: 5m
  # The maximum amount of time a connection may be idle. Expired connections may be closed lazily before reuse. If it is <= 0, connections are not closed due to a connection's idle time.
  conn_max_idle_time: 0

# Mounted data directory
# The default data volume
data_volume: /data

# Harbor Storage settings: by default the /data dir on the local filesystem is used.
# Uncomment the storage_service setting if you want to use external storage.
# storage_service:
#   # ca_bundle is the path to the custom root ca certificate, which will be injected into the truststore
#   # of the registry's containers. This is usually needed when the user hosts internal storage with a self-signed certificate.
#   ca_bundle:
#   # storage backend, default is filesystem; options include filesystem, azure, gcs, s3, swift and oss
#   # for more info about this configuration please refer to https://distribution.github.io/distribution/about/configuration/
#   # and https://distribution.github.io/distribution/storage-drivers/
#   filesystem:
#     maxthreads: 100
#   # set disable to true when you want to disable registry redirect
#   redirect:
#     disable: false

# Trivy configuration
#
# The Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
# It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
# in the local file system. In addition, the database contains the update timestamp so Trivy can detect whether it
# should download a newer version from the Internet or use the cached one. Currently, the database is updated every
# 12 hours and published as a new release to GitHub.
trivy:
  # ignoreUnfixed: the flag to display only fixed vulnerabilities
  ignore_unfixed: false
  # skipUpdate: the flag to enable or disable Trivy DB downloads from GitHub
  #
  # You might want to enable this flag in test or CI/CD environments to avoid GitHub rate limiting issues.
  # If the flag is enabled you have to download the trivy-offline.tar.gz archive manually, extract the trivy.db and
  # metadata.json files and mount them in the /home/scanner/.cache/trivy/db path.
  skip_update: false
  #
  # skipJavaDBUpdate: if the flag is enabled you have to manually download the trivy-java.db file and mount it in the
  # /home/scanner/.cache/trivy/java-db/trivy-java.db path
  skip_java_db_update: false
  #
  # The offline_scan option prevents Trivy from sending API requests to identify dependencies.
  # Scanning JAR files and pom.xml may require Internet access for better detection, but this option tries to avoid it.
  # For example, the offline mode will not try to resolve transitive dependencies in pom.xml when the dependency doesn't
  # exist in the local repositories. It means the number of detected vulnerabilities might be fewer in offline mode.
  # It works if all the dependencies are local.
  # This option doesn't affect DB download. You need to specify "skip-update" as well as "offline-scan" in an air-gapped environment.
  offline_scan: false
  #
  # Comma-separated list of what security issues to detect. Possible values are "vuln", "config" and "secret". Defaults to "vuln".
  security_check: vuln
  #
  # insecure: the flag to skip verifying the registry certificate
  insecure: false
  #
  # timeout: the duration to wait for scan completion.
  # There is an upper bound of 30 minutes defined in the scan job, so if this timeout is larger than 30m0s, it will also time out at 30m0s.
  timeout: 5m0s
  #
  # github_token: the GitHub access token to download the Trivy DB
  #
  # Anonymous downloads from GitHub are subject to a limit of 60 requests per hour. Normally such a rate limit is enough
  # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
  # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
  # https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting
  #
  # You can create a GitHub token by following the instructions in
  # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
  #
  # github_token: xxx

jobservice:
  # Maximum number of job workers in the job service
  max_job_workers: 10
  # The jobLoggers backend name; only "STD_OUTPUT", "FILE" and/or "DB" are supported
  job_loggers:
    - STD_OUTPUT
    - FILE
    # - DB
  # The jobLogger sweeper duration (ignored if jobLogger is stdout)
  logger_sweeper_duration: 1 #days

notification:
  # Maximum retry count for a webhook job
  webhook_job_max_retry: 3
  # HTTP client timeout for a webhook job
  webhook_job_http_client_timeout: 3 #seconds

# Log configurations
log:
  # options are debug, info, warning, error, fatal
  level: info
  # configs for logs in local storage
  local:
    # Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
    rotate_count: 50
    # Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
    # If M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
    # are all valid.
    rotate_size: 200M
    # The directory on your host that stores logs
    location: /var/log/harbor
  # Uncomment the following lines to enable an external syslog endpoint.
  # external_endpoint:
  #   # protocol used to transmit logs to the external endpoint; options are tcp or udp
  #   protocol: tcp
  #   # The host of the external endpoint
  #   host: localhost
  #   # Port of the external endpoint
  #   port: 5140

# This attribute is for the migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 2.11.0

# Uncomment external_database if using an external database.
# external_database:
#   harbor:
#     host: harbor_db_host
#     port: harbor_db_port
#     db_name: harbor_db_name
#     username: harbor_db_username
#     password: harbor_db_password
#     ssl_mode: disable
#     max_idle_conns: 2
#     max_open_conns: 0

# Uncomment redis if you need to customize the redis db
# redis:
#   # db_index 0 is for core, it's unchangeable
#   # registry_db_index: 1
#   # jobservice_db_index: 2
#   # trivy_db_index: 5
#   # optional: the db for harbor business misc, 0 by default; uncomment it if you want to change it.
#   # harbor_db_index: 6
#   # optional: the db for the harbor cache layer, 0 by default; uncomment it if you want to change it.
#   # cache_layer_db_index: 7

# Uncomment external_redis if using an external Redis server
# external_redis:
#   # support redis, redis+sentinel
#   # host for redis: <host_redis>:<port_redis>
#   # host for redis+sentinel:
#   #   <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
#   host: redis:6379
#   password:
#   # The Redis AUTH command was extended in Redis 6; it is possible to use it in the two-argument AUTH <username> <password> form.
#   # there's a known issue when using an external redis username, ref: https://github.com/goharbor/harbor/issues/18892
#   # if you care about image pull/push performance, please refer to https://github.com/goharbor/harbor/wiki/Harbor-FAQs#external-redis-username-password-usage
#   # username:
#   # sentinel_master_set must be set to support redis+sentinel
#   # sentinel_master_set:
#   # db_index 0 is for core, it's unchangeable
#   registry_db_index: 1
#   jobservice_db_index: 2
#   trivy_db_index: 5
#   idle_timeout_seconds: 30
#   # optional: the db for harbor business misc, 0 by default; uncomment it if you want to change it.
#   # harbor_db_index: 6
#   # optional: the db for the harbor cache layer, 0 by default; uncomment it if you want to change it.
#   # cache_layer_db_index: 7

# Uncomment uaa for trusting the certificate of a uaa instance that is hosted with a self-signed cert.
# uaa:
#   ca_file: /path/to/ca

# Global proxy
# Configure an http proxy for components, e.g. http://my.proxy.com:3128
# Components don't need to connect to each other via an http proxy.
# Remove a component from the components array if you want to disable the proxy
# for it. If you want to use the proxy for replication, you MUST enable the proxy
# for core and jobservice, and set http_proxy and https_proxy.
# Add a domain to the no_proxy field when you want to disable the proxy
# for some special registry.
proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - trivy

# metric:
#   enabled: false
#   port: 9090
#   path: /metrics

# Trace related config
# Only one trace provider (jaeger or otel) can be enabled at the same time,
# and when using jaeger as the provider, it can only be enabled in agent mode or collector mode.
# If using jaeger collector mode, uncomment endpoint and uncomment username, password if needed.
# If using jaeger agent mode, uncomment agent_host and agent_port.
# trace:
#   enabled: true
#   # set sample_rate to 1 if you want to sample 100% of trace data; set 0.5 if you want to sample 50% of trace data, and so forth
#   sample_rate: 1
#   # # namespace used to differentiate different harbor services
#   # namespace:
#   # # attributes is a key-value dict containing user-defined attributes used to initialize the trace provider
#   # attributes:
#   #   application: harbor
#   # # jaeger should be 1.26 or newer.
#   # jaeger:
#   #   endpoint: http://hostname:14268/api/traces
#   #   username:
#   #   password:
#   #   agent_host: hostname
#   #   # export trace data by jaeger.thrift in compact mode
#   #   agent_port: 6831
#   # otel:
#   #   endpoint: hostname:4318
#   #   url_path: /v1/traces
#   #   compression: false
#   #   insecure: true
#   #   # timeout is in seconds
#   #   timeout: 10

# Enable purging of _upload directories
upload_purging:
  enabled: true
  # remove files in _upload directories which have existed for a period of time; default is one week.
  age: 168h
  # the interval of the purge operations
  interval: 24h
  dryrun: false

# Cache layer configurations
# If this feature is enabled, harbor will cache the resources
# project/project_metadata/repository/artifact/manifest in redis,
# which can especially help to improve the performance of highly concurrent
# manifest pulling.
# NOTICE:
# If you are deploying Harbor in HA mode, make sure that all the harbor
# instances have the same behaviour, all with caching enabled or disabled,
# otherwise it can lead to potential data inconsistency.
cache:
  # not enabled by default
  enabled: false
  # keep cache for one day by default
  expire_hours: 24

# Harbor core configurations
# Uncomment to enable the following harbor-core-related configuration items.
# core:
#   # The provider for updating project quota (usage); there are 2 options, redis or db.
#   # By default it is implemented by db, but you can switch the update to redis, which
#   # can improve the performance of highly concurrent pushing to the same project
#   # and reduce database connection spikes and occupancy.
#   # Redis introduces some delay before the quota usage shown in the UI updates, so only
#   # switch the provider to redis if you have run into db connection spikes in
#   # the scenario of highly concurrent pushing to the same project; there is no improvement for other scenarios.
#   quota_update_provider: redis # Or db
```

Installation

Run the install script: in the harbor directory, execute ./install.sh. This script starts the Harbor containers according to the configuration in harbor.yml. Installation may take some time, because multiple container components need to be loaded and started (Harbor's core services, the database, Redis, and so on).

Verify the installation: once installation completes, open the configured hostname and port in a browser, e.g. http://192.168.1.100:80 (substitute your actual hostname and port). If the Harbor login page appears, the deployment succeeded. Log in with the default administrator account admin and the password set in harbor.yml, and you can start using Harbor to manage container images.
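To script the verification step instead of refreshing a browser, you can poll Harbor's health endpoint (GET /api/v2.0/health in the Harbor v2.0 REST API). The helper below is a sketch; the address in the usage comment is the example value from the text.

```shell
# Poll Harbor's /api/v2.0/health endpoint until it reports "healthy",
# retrying up to $2 times (default 30) with a short pause in between.
wait_for_harbor() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url/api/v2.0/health" 2>/dev/null | grep -q '"status":"healthy"'; then
      echo "harbor is healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "harbor did not become healthy in time" >&2
  return 1
}

# Usage with the example address from the text:
#   wait_for_harbor http://192.168.1.100:80
```

This is handy right after ./install.sh, since the containers need a little time to come up before the login page responds.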
Files Generated After Installation

Running the installer generates a docker-compose.yml and a common/ directory (per-component configuration and certificates) alongside harbor.yml. Access the management console at http://192.168.79.30/harbor/projects
Using and Maintaining Harbor
Pushing and pulling images: on the client machine, add the Harbor server address to Docker's list of trusted (insecure) registries. For example, if the Harbor server is 192.168.79.30:80, add the following to /etc/docker/daemon.json on the client (create the file if it does not exist):

{
  "insecure-registries": ["192.168.79.30:80"]
}

Then restart the Docker service on the client machine. After that you can push local images to the Harbor registry, e.g. docker push 192.168.79.30:80/your-project/your-image-name, and pull images from it, e.g. docker pull 192.168.79.30:80/your-project/your-image-name (in Harbor, repositories always live under a project, such as the default library project).

Backing up and updating Harbor: regularly back up Harbor's data, including everything under the data_volume path as well as database backups if you use an external database. Also watch the official Harbor website for release announcements and follow the official upgrade guide to update Harbor promptly, keeping the system secure and fully functional.
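The daemon.json change above can be scripted. The snippet below writes the fragment to a scratch file and validates it before it would be installed; 192.168.79.30:80 is the example registry address from the text, and the privileged copy/restart commands are left as comments since they require root.

```shell
# Write the insecure-registries fragment to a scratch file first.
cat > daemon.json <<'EOF'
{
  "insecure-registries": ["192.168.79.30:80"]
}
EOF

# Validate the JSON before touching the real Docker config;
# a malformed daemon.json prevents the Docker daemon from starting.
python3 -m json.tool daemon.json

# Then, as root on the client machine:
#   cp daemon.json /etc/docker/daemon.json
#   systemctl restart docker
# After the restart, push and pull through Harbor (repositories live
# under a project, e.g. the default "library" project):
#   docker tag your-image-name 192.168.79.30:80/library/your-image-name
#   docker push 192.168.79.30:80/library/your-image-name
```

Validating first is worth the extra step: a single stray comma in daemon.json stops dockerd from starting at all.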
References
https://goharbor.io/docs/2.12.0