

# Deploying a Kubernetes Cluster with kubeadm (Study Notes)

Disclaimer: this article is only a personal record of my notes while learning Kubernetes.

## Node planning

Both nodes are Alibaba Cloud ECS instances running CentOS 7.9. The master has 2 vCPUs / 4 GB RAM, the node 2 vCPUs / 2 GB RAM, and each has a 40 GB disk. (A node needs at least 2 vCPUs / 2 GB RAM.)

| Hostname | Node IP | Role | Deployed components |
| --- | --- | --- | --- |
| k8s-master | 172.23.83.164 | master | etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel |
| k8s-node1 | 172.23.83.165 | node | kubectl, kubelet, kube-proxy, flannel |

## Component versions

Check the distribution with `lsb_release -a` and the kernel with `uname -r`.

| Component | Version | Notes |
| --- | --- | --- |
| CentOS | 7.9.2009 | |
| Kernel | Linux 3.10.0-1160.119.1.el7.x86_64 | |
| etcd | 3.3.15 | Deployed as a container; data is mounted to a local path by default |
| coredns | 1.6.2 | |
| kubeadm | v1.16.2 | |
| kubectl | v1.16.2 | |
| kubelet | v1.16.2 | |
| kube-proxy | v1.16.2 | |
| flannel | v0.11.0 | |

## Installation steps

### 1. Set the hostname and hosts entries (all nodes)

Set the hostname:

```shell
hostnamectl set-hostname k8s-master   # on the master
hostnamectl set-hostname k8s-node1    # on the node
```

Run `bash` afterwards for it to take effect immediately, or edit `/etc/hostname` (takes effect after a reboot).

Add the hosts entries:

```shell
cat >> /etc/hosts <<EOF
172.23.83.164 k8s-master
172.23.83.165 k8s-node1
EOF
```

Or open `/etc/hosts` with vim and add the two lines above by hand.

### 2. Adjust system settings (all nodes)

If there are no security-group restrictions between the nodes (machines on the internal network can reach each other freely), this can be skipped; otherwise at least the following ports must be reachable:

- k8s-master: TCP 6443, 2379, 2380, 60080 and 60081; all UDP ports open
- k8s-node: all UDP ports open

Allow forwarded traffic through iptables:

```shell
iptables -P FORWARD ACCEPT
```

Turn off swap:

```shell
swapoff -a
```
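The hosts additions above can also be made idempotent, so that re-running the setup does not append duplicate entries. A minimal sketch, writing to a local `hosts` file for illustration (on a real node the target would be `/etc/hosts`):

```shell
# Append each entry only if it is not already present (idempotent).
# Writes to ./hosts here for illustration; on a real node use /etc/hosts.
hosts_file=hosts
touch "$hosts_file"
for entry in "172.23.83.164 k8s-master" "172.23.83.165 k8s-node1"; do
  grep -qxF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
done
cat "$hosts_file"
```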

Prevent the swap partition from being mounted automatically at boot:

```shell
sed -ri '/ swap / s/^(.*)$/#\1/g' /etc/fstab
```

Disable SELinux and the firewall:

```shell
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
```

Adjust kernel parameters:

```shell
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
```

Synchronize the clock:

```shell
yum install ntpdate -y
ntpdate time.windows.com
```

Enable ipvs:

```shell
yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
```

Make the script executable, run it, and check that the modules are loaded:

```shell
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```
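The `=` signs in configuration snippets are easily lost when copying from a web page, so it is worth generating and checking the sysctl fragment in one go. A sketch, writing to `./k8s.conf` for illustration (the real path is `/etc/sysctl.d/k8s.conf`):

```shell
# Generate the sysctl fragment and verify every line is a "key = value" pair.
cat > k8s.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
EOF
# Count lines that are missing an "=" sign
malformed=$(grep -cv '=' k8s.conf || true)
echo "${malformed} malformed line(s)"
```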
Set up the yum repositories:

```shell
curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all
yum makecache
```

### 3. Install Docker (all nodes)

I installed 18.09.9; with the latest version my environment failed to come up.

```shell
# List all available versions
yum list docker-ce --showduplicates | sort -r
# Install the older version
yum install docker-ce-cli-18.09.9-3.el7 docker-ce-18.09.9-3.el7
```
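A copy-paste from the web can silently drop the `=` signs in the repo file, which makes yum fail later. A sketch that recreates the kubernetes.repo from above and checks that every required key survived (written to `./kubernetes.repo` here for illustration; the real path is `/etc/yum.repos.d/kubernetes.repo`):

```shell
# Recreate the repo file locally and sanity-check its required keys.
cat > kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
for key in name baseurl enabled gpgcheck gpgkey; do
  grep -q "^${key}=" kubernetes.repo || { echo "missing ${key}" >&2; exit 1; }
done
echo "kubernetes.repo looks complete"
```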

To install the latest version from the repository instead:

```shell
yum install docker-ce
```

Configure a Docker registry mirror:

```shell
mkdir -p /etc/docker
vi /etc/docker/daemon.json
```

```json
{
  "insecure-registries": ["172.23.83.164:5000"],
  "registry-mirrors": ["https://8xpk5wnt.mirror.aliyuncs.com"]
}
```
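A malformed daemon.json prevents the Docker daemon from starting, so it pays to validate the file before restarting. A sketch, assuming python3 is available and writing to `./daemon.json` for illustration (the real path is `/etc/docker/daemon.json`):

```shell
# Write the daemon.json from above and verify that it parses as JSON
# before restarting Docker (a broken file stops the daemon from starting).
cat > daemon.json <<'EOF'
{
  "insecure-registries": ["172.23.83.164:5000"],
  "registry-mirrors": ["https://8xpk5wnt.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool daemon.json >/dev/null && echo "daemon.json is valid JSON"
```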

Start Docker:

```shell
systemctl enable docker
systemctl start docker
```

### 4. Deploy Kubernetes (all nodes)

```shell
yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 --disableexcludes=kubernetes
```

Check the kubeadm version:

```shell
kubeadm version
```

Enable kubelet at boot:

```shell
systemctl enable kubelet
```

Generate the initialization config file (the following steps are executed only on the master):

```shell
kubeadm config print init-defaults > kubeadm.yaml
```

Edit kubeadm.yaml as shown below: change the API server address, the image repository, and the pod network CIDR.

```yaml
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.23.83.164  # apiserver address; single master, so use the master's internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # switch to the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # pod network CIDR; the flannel plugin uses this range
  serviceSubnet: 10.96.0.0/12
scheduler: {}
```

Download the images:

```shell
# List the images that will be used; if all is well you get the list below
$ kubeadm config images list --config kubeadm.yaml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.16.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.16.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.16.0
registry.aliyuncs.com/google_containers/pause:3.1
registry.aliyuncs.com/google_containers/etcd:3.3.15-0
registry.aliyuncs.com/google_containers/coredns:1.6.2

# Pull the images to the local machine ahead of time
$ kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.16.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.16.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.16.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.3.15-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.6.2
```
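Since the comment above notes that flannel relies on this pod network range, it is worth checking that `podSubnet` and flannel's `Network` agree before initializing. A sketch with both values hard-coded from this article's configuration (file names here are for illustration only):

```shell
# Extract podSubnet from a kubeadm.yaml fragment and compare it with the
# "Network" value used later in flannel's net-conf.json.
cat > kubeadm-networking.yaml <<'EOF'
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF
pod_cidr=$(awk '/podSubnet:/ {print $2}' kubeadm-networking.yaml)
flannel_cidr="10.244.0.0/16"   # "Network" in flannel's net-conf.json
if [ "$pod_cidr" = "$flannel_cidr" ]; then
  echo "pod CIDR matches flannel Network"
else
  echo "mismatch: kubeadm=${pod_cidr} flannel=${flannel_cidr}" >&2
  exit 1
fi
```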
Initialize the master node:

```shell
kubeadm init --config kubeadm.yaml
```

On success the output ends with the commands below; run them as prompted:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Join the node to the cluster (on the node):

```shell
kubeadm join 172.23.83.164:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1d1fe362b31c06f38e1097fb6bbbf89cd4ad738cafc51d49ceb86e0654ab12b2
```

The token is valid for 24 hours; if it has expired, see the following for how to create a new one: https://blog.csdn.net/weixin_58746210/article/details/139882088

At this point `kubectl get nodes` shows every node as NotReady, because no network plugin has been installed yet.

### 5. Install the flannel plugin (master)

Download flannel's yml file directly (this never worked for me):

```shell
wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
```

If the download fails, open the URL in a browser, copy the content, and paste it into the file (`vim kube-flannel.yml`), taking care to keep the formatting intact:

```yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ["NET_ADMIN"]
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: RunAsAny
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
    resourceNames: ["psp.flannel.unprivileged"]
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "cniVersion": "0.2.0",
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
```

The manifest continues with four more DaemonSets (kube-flannel-ds-arm64, kube-flannel-ds-arm, kube-flannel-ds-ppc64le and kube-flannel-ds-s390x) that are identical to the amd64 one above except for the `beta.kubernetes.io/arch` value and the image tag (`quay.io/coreos/flannel:v0.11.0-<arch>`).

Pull the image first; this is fairly slow from inside China:

```shell
docker pull quay.io/coreos/flannel:v0.11.0-amd64
```
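The manifest ships one DaemonSet per architecture, but only the image matching the local machine is actually needed. A sketch that maps `uname -m` to the corresponding flannel image tag (the `flannel_image.txt` output file is just for illustration):

```shell
# Map the local architecture to the matching flannel v0.11.0 image tag.
case "$(uname -m)" in
  x86_64)  arch=amd64 ;;
  aarch64) arch=arm64 ;;
  armv7l)  arch=arm ;;
  ppc64le) arch=ppc64le ;;
  s390x)   arch=s390x ;;
  *) echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;
esac
image="quay.io/coreos/flannel:v0.11.0-${arch}"
echo "$image" | tee flannel_image.txt
# e.g. docker pull "$image"
```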

Run the flannel installation:

```shell
kubectl create -f kube-flannel.yml
```

### 6. Verify that the cluster came up

If it succeeded, every node reports a Ready status. If not, see http://t.csdnimg.cn/hP6Rk
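The Ready check can also be scripted rather than eyeballed. A sketch that parses `kubectl get nodes` output; sample output from this article's two-node cluster stands in here so the logic can be shown without a live cluster:

```shell
# On a real cluster, capture live output instead:
#   kubectl get nodes > nodes.txt
cat > nodes.txt <<'EOF'
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   10m   v1.16.2
k8s-node1    Ready    <none>   8m    v1.16.2
EOF
# Count nodes whose STATUS column is anything other than Ready
not_ready=$(awk 'NR>1 && $2 != "Ready"' nodes.txt | wc -l)
if [ "$not_ready" -eq 0 ]; then
  echo "all nodes Ready"
else
  echo "${not_ready} node(s) not Ready" >&2
fi
```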